Jan 20 01:32:40.841835 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:27:27 -00 2026
Jan 20 01:32:40.841879 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 01:32:40.841889 kernel: BIOS-provided physical RAM map:
Jan 20 01:32:40.841913 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 01:32:40.841919 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 01:32:40.841925 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 01:32:40.841932 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 01:32:40.841939 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 01:32:40.841945 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 01:32:40.841951 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 01:32:40.841958 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 01:32:40.841980 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 01:32:40.841987 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 01:32:40.841993 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 01:32:40.842001 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 01:32:40.842008 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 01:32:40.842030 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 01:32:40.842037 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 01:32:40.842044 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 01:32:40.842050 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 01:32:40.842057 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 01:32:40.842064 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 01:32:40.842070 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 01:32:40.842077 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:32:40.842084 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 01:32:40.842090 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:32:40.842112 kernel: NX (Execute Disable) protection: active
Jan 20 01:32:40.842144 kernel: APIC: Static calls initialized
Jan 20 01:32:40.842151 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 01:32:40.842158 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 01:32:40.842165 kernel: extended physical RAM map:
Jan 20 01:32:40.842186 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 01:32:40.842193 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 01:32:40.842199 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 01:32:40.842206 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 01:32:40.842213 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 01:32:40.842220 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 01:32:40.842246 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 01:32:40.842253 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 01:32:40.842272 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 01:32:40.842309 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 01:32:40.842345 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 01:32:40.842364 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 01:32:40.842372 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 01:32:40.842379 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 01:32:40.842386 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 01:32:40.842393 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 01:32:40.842400 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 01:32:40.842407 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 01:32:40.842414 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 01:32:40.842439 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 01:32:40.842447 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 01:32:40.842454 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 01:32:40.842461 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 01:32:40.842468 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 01:32:40.842476 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:32:40.842483 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 01:32:40.842490 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:32:40.842497 kernel: efi: EFI v2.7 by EDK II
Jan 20 01:32:40.842504 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 01:32:40.842511 kernel: random: crng init done
Jan 20 01:32:40.842573 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 01:32:40.842581 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 01:32:40.842588 kernel: secureboot: Secure boot disabled
Jan 20 01:32:40.842595 kernel: SMBIOS 2.8 present.
Jan 20 01:32:40.842602 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 01:32:40.842609 kernel: DMI: Memory slots populated: 1/1
Jan 20 01:32:40.842616 kernel: Hypervisor detected: KVM
Jan 20 01:32:40.842624 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 01:32:40.842631 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 01:32:40.842638 kernel: kvm-clock: using sched offset of 7359898421 cycles
Jan 20 01:32:40.842645 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:32:40.842670 kernel: tsc: Detected 2445.424 MHz processor
Jan 20 01:32:40.842678 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 01:32:40.842685 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 01:32:40.842693 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 01:32:40.842700 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 01:32:40.842707 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 01:32:40.842715 kernel: Using GB pages for direct mapping
Jan 20 01:32:40.842738 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:32:40.842746 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 01:32:40.842753 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 01:32:40.842761 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:32:40.842768 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:32:40.842776 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 01:32:40.842783 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:32:40.842790 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:32:40.842814 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:32:40.842821 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:32:40.842829 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 01:32:40.842836 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 01:32:40.842843 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 01:32:40.842851 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 01:32:40.842858 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 01:32:40.842881 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 01:32:40.842889 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 01:32:40.842896 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 01:32:40.842903 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 01:32:40.842910 kernel: No NUMA configuration found
Jan 20 01:32:40.842918 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 01:32:40.842925 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 01:32:40.842949 kernel: Zone ranges:
Jan 20 01:32:40.842957 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 01:32:40.842964 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 01:32:40.842971 kernel: Normal empty
Jan 20 01:32:40.842978 kernel: Device empty
Jan 20 01:32:40.842986 kernel: Movable zone start for each node
Jan 20 01:32:40.842993 kernel: Early memory node ranges
Jan 20 01:32:40.843000 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 01:32:40.843022 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 01:32:40.843030 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 01:32:40.843037 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 01:32:40.843044 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 01:32:40.843052 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 01:32:40.843059 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 01:32:40.843066 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 01:32:40.843073 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 01:32:40.843096 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:32:40.843171 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 01:32:40.843195 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 01:32:40.843203 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:32:40.843210 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 01:32:40.843218 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 01:32:40.843226 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 01:32:40.843233 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 01:32:40.843241 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 01:32:40.843265 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 01:32:40.843272 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 01:32:40.843280 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 01:32:40.843288 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 01:32:40.843311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 01:32:40.843319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 01:32:40.843326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 01:32:40.843334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 01:32:40.843341 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 01:32:40.843349 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 01:32:40.843356 kernel: TSC deadline timer available
Jan 20 01:32:40.843364 kernel: CPU topo: Max. logical packages: 1
Jan 20 01:32:40.843387 kernel: CPU topo: Max. logical dies: 1
Jan 20 01:32:40.843394 kernel: CPU topo: Max. dies per package: 1
Jan 20 01:32:40.843402 kernel: CPU topo: Max. threads per core: 1
Jan 20 01:32:40.843409 kernel: CPU topo: Num. cores per package: 4
Jan 20 01:32:40.843417 kernel: CPU topo: Num. threads per package: 4
Jan 20 01:32:40.843424 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 01:32:40.843432 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 01:32:40.843454 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 01:32:40.843462 kernel: kvm-guest: setup PV sched yield
Jan 20 01:32:40.843470 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 01:32:40.843478 kernel: Booting paravirtualized kernel on KVM
Jan 20 01:32:40.843485 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 01:32:40.843493 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 01:32:40.843501 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 01:32:40.843508 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 01:32:40.843948 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 01:32:40.843957 kernel: kvm-guest: PV spinlocks enabled
Jan 20 01:32:40.843965 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 01:32:40.843974 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 01:32:40.843982 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:32:40.843990 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:32:40.844018 kernel: Fallback order for Node 0: 0
Jan 20 01:32:40.844026 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 01:32:40.844034 kernel: Policy zone: DMA32
Jan 20 01:32:40.844042 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:32:40.844049 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 01:32:40.844057 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 01:32:40.844065 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 01:32:40.844093 kernel: Dynamic Preempt: voluntary
Jan 20 01:32:40.844101 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:32:40.844109 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:32:40.844118 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 01:32:40.844156 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:32:40.844164 kernel: Rude variant of Tasks RCU enabled.
Jan 20 01:32:40.844171 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:32:40.844179 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:32:40.844205 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 01:32:40.844213 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:32:40.844221 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:32:40.844229 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:32:40.844237 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 01:32:40.844244 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:32:40.844252 kernel: Console: colour dummy device 80x25
Jan 20 01:32:40.844289 kernel: printk: legacy console [ttyS0] enabled
Jan 20 01:32:40.844298 kernel: ACPI: Core revision 20240827
Jan 20 01:32:40.844305 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 01:32:40.844313 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 01:32:40.844321 kernel: x2apic enabled
Jan 20 01:32:40.844329 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 01:32:40.844336 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 01:32:40.844344 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 01:32:40.844370 kernel: kvm-guest: setup PV IPIs
Jan 20 01:32:40.844378 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 01:32:40.844386 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 20 01:32:40.844394 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 20 01:32:40.844402 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 01:32:40.844409 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 01:32:40.844417 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 01:32:40.844442 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 01:32:40.844449 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 01:32:40.844457 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 01:32:40.844465 kernel: Speculative Store Bypass: Vulnerable
Jan 20 01:32:40.844472 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 01:32:40.844481 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 01:32:40.844504 kernel: active return thunk: srso_alias_return_thunk
Jan 20 01:32:40.844512 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 01:32:40.844552 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 01:32:40.844560 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 01:32:40.844567 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 01:32:40.844575 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 01:32:40.844583 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 01:32:40.844613 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 01:32:40.844621 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 01:32:40.844629 kernel: Freeing SMP alternatives memory: 32K
Jan 20 01:32:40.844637 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:32:40.844644 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 01:32:40.844652 kernel: landlock: Up and running.
Jan 20 01:32:40.844659 kernel: SELinux: Initializing.
Jan 20 01:32:40.844685 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:32:40.844692 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:32:40.844700 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 01:32:40.844708 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 01:32:40.844716 kernel: signal: max sigframe size: 1776
Jan 20 01:32:40.844723 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:32:40.844731 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:32:40.844755 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 01:32:40.844763 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 01:32:40.844771 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:32:40.844779 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 01:32:40.844786 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 01:32:40.844794 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 01:32:40.844801 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 20 01:32:40.844809 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120812K reserved, 0K cma-reserved)
Jan 20 01:32:40.844833 kernel: devtmpfs: initialized
Jan 20 01:32:40.844840 kernel: x86/mm: Memory block size: 128MB
Jan 20 01:32:40.844848 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 01:32:40.844856 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 01:32:40.844864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 01:32:40.844871 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 01:32:40.844895 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 01:32:40.844903 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 01:32:40.844910 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:32:40.844918 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 01:32:40.844929 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:32:40.844936 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:32:40.844944 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:32:40.844967 kernel: audit: type=2000 audit(1768872756.773:1): state=initialized audit_enabled=0 res=1
Jan 20 01:32:40.844975 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:32:40.844983 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 01:32:40.844991 kernel: cpuidle: using governor menu
Jan 20 01:32:40.844998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:32:40.845006 kernel: dca service started, version 1.12.1
Jan 20 01:32:40.845013 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 01:32:40.845037 kernel: PCI: Using configuration type 1 for base access
Jan 20 01:32:40.845045 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 01:32:40.845052 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:32:40.845060 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:32:40.845068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:32:40.845075 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:32:40.845083 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:32:40.845090 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:32:40.845114 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:32:40.845145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:32:40.845153 kernel: ACPI: Interpreter enabled
Jan 20 01:32:40.845161 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 01:32:40.845168 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 01:32:40.845187 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 01:32:40.845195 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 01:32:40.845222 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 01:32:40.845230 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 01:32:40.845491 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 01:32:40.845731 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 01:32:40.845908 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 01:32:40.845947 kernel: PCI host bridge to bus 0000:00
Jan 20 01:32:40.846214 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 01:32:40.846376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 01:32:40.847832 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 01:32:40.848647 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 01:32:40.849478 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 01:32:40.850470 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 01:32:40.852823 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 01:32:40.853166 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 01:32:40.853435 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 01:32:40.853765 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 01:32:40.854079 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 01:32:40.854379 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 01:32:40.854699 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 01:32:40.855110 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 01:32:40.855407 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 01:32:40.855716 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 01:32:40.856024 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 01:32:40.856329 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 01:32:40.856644 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 01:32:40.856923 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 01:32:40.857899 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 01:32:40.858110 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 01:32:40.858346 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 01:32:40.858590 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 01:32:40.858826 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 01:32:40.859054 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 01:32:40.859342 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 01:32:40.859560 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 01:32:40.859824 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 01:32:40.860050 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 01:32:40.860327 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 01:32:40.860677 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 01:32:40.860919 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 01:32:40.860974 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 01:32:40.860983 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 01:32:40.860991 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 01:32:40.860999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 01:32:40.861007 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 01:32:40.861015 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 01:32:40.861023 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 01:32:40.861050 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 01:32:40.861058 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 01:32:40.861066 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 01:32:40.861074 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 01:32:40.861082 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 01:32:40.861090 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 01:32:40.861098 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 01:32:40.861149 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 01:32:40.861158 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 01:32:40.861166 kernel: iommu: Default domain type: Translated
Jan 20 01:32:40.861174 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 01:32:40.861182 kernel: efivars: Registered efivars operations
Jan 20 01:32:40.861190 kernel: PCI: Using ACPI for IRQ routing
Jan 20 01:32:40.861197 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 01:32:40.861223 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 01:32:40.861231 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 01:32:40.861239 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 01:32:40.861246 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 01:32:40.861254 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 01:32:40.861262 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 01:32:40.861270 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 01:32:40.861294 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 01:32:40.861476 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 01:32:40.861734 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 01:32:40.861936 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 01:32:40.861949 kernel: vgaarb: loaded
Jan 20 01:32:40.861957 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 01:32:40.861965 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 01:32:40.862006 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 01:32:40.862014 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:32:40.862023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:32:40.862031 kernel: pnp: PnP ACPI init
Jan 20 01:32:40.862256 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 01:32:40.862270 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 01:32:40.862305 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 01:32:40.862314 kernel: NET: Registered PF_INET protocol family
Jan 20 01:32:40.862322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:32:40.862329 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:32:40.862337 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:32:40.862345 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:32:40.862457 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:32:40.862563 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:32:40.862578 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:32:40.862591 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:32:40.862606 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:32:40.862617 kernel: NET: Registered PF_XDP protocol family
Jan 20 01:32:40.862855 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 01:32:40.863050 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 01:32:40.863276 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 01:32:40.863436 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 01:32:40.863661 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 01:32:40.863848 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 01:32:40.864007 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 01:32:40.864197 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 01:32:40.864236 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:32:40.864246 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 20 01:32:40.864254 kernel: Initialise system trusted keyrings
Jan 20 01:32:40.864262 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:32:40.864270 kernel: Key type asymmetric registered
Jan 20 01:32:40.864278 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:32:40.864286 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 01:32:40.864311 kernel: io scheduler mq-deadline registered
Jan 20 01:32:40.864319 kernel: io scheduler kyber registered
Jan 20 01:32:40.864328 kernel: io scheduler bfq registered
Jan 20 01:32:40.864335 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 01:32:40.864344 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 01:32:40.864353 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 01:32:40.864361 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 01:32:40.864386 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:32:40.864394 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 01:32:40.864403 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 01:32:40.864410 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 01:32:40.864419 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 01:32:40.864764 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 01:32:40.864780 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 01:32:40.865002 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 01:32:40.865266 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T01:32:38 UTC (1768872758)
Jan 20 01:32:40.865508 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 01:32:40.865567 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 01:32:40.865603 kernel: efifb: probing for efifb
Jan 20 01:32:40.865611 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 01:32:40.865620 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 01:32:40.865628 kernel: efifb: scrolling: redraw
Jan 20 01:32:40.865636 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 01:32:40.865644 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 01:32:40.865652 kernel: fb0: EFI VGA frame buffer device
Jan 20 01:32:40.865677 kernel: pstore: Using crash dump compression: deflate
Jan 20 01:32:40.865686 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 01:32:40.865694 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:32:40.865702 kernel: Segment Routing with IPv6
Jan 20 01:32:40.865710 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:32:40.865718 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:32:40.865726 kernel: Key type dns_resolver registered
Jan 20 01:32:40.865734 kernel: IPI shorthand broadcast: enabled
Jan 20 01:32:40.865759 kernel: sched_clock: Marking stable (2383016731, 518753249)->(3073783337,
-172013357) Jan 20 01:32:40.865767 kernel: registered taskstats version 1 Jan 20 01:32:40.865775 kernel: Loading compiled-in X.509 certificates Jan 20 01:32:40.865783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 39f154fc6e329874bced8cdae9473f98b7dd3f43' Jan 20 01:32:40.865791 kernel: Demotion targets for Node 0: null Jan 20 01:32:40.865800 kernel: Key type .fscrypt registered Jan 20 01:32:40.865808 kernel: Key type fscrypt-provisioning registered Jan 20 01:32:40.865832 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 01:32:40.865840 kernel: ima: Allocated hash algorithm: sha1 Jan 20 01:32:40.865848 kernel: ima: No architecture policies found Jan 20 01:32:40.865856 kernel: clk: Disabling unused clocks Jan 20 01:32:40.865864 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 20 01:32:40.865872 kernel: Write protecting the kernel read-only data: 47104k Jan 20 01:32:40.865880 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 20 01:32:40.865904 kernel: Run /init as init process Jan 20 01:32:40.865912 kernel: with arguments: Jan 20 01:32:40.865920 kernel: /init Jan 20 01:32:40.865928 kernel: with environment: Jan 20 01:32:40.865936 kernel: HOME=/ Jan 20 01:32:40.865944 kernel: TERM=linux Jan 20 01:32:40.865952 kernel: SCSI subsystem initialized Jan 20 01:32:40.865975 kernel: libata version 3.00 loaded. 
Jan 20 01:32:40.866251 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 01:32:40.866265 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 01:32:40.866434 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 01:32:40.866645 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 01:32:40.866861 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 01:32:40.867174 kernel: scsi host0: ahci Jan 20 01:32:40.867394 kernel: scsi host1: ahci Jan 20 01:32:40.867629 kernel: scsi host2: ahci Jan 20 01:32:40.867881 kernel: scsi host3: ahci Jan 20 01:32:40.868167 kernel: scsi host4: ahci Jan 20 01:32:40.868379 kernel: scsi host5: ahci Jan 20 01:32:40.868423 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 20 01:32:40.868432 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 20 01:32:40.868440 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 20 01:32:40.868449 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 20 01:32:40.868457 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 20 01:32:40.868465 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 20 01:32:40.868492 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 01:32:40.868500 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 01:32:40.868508 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 01:32:40.868548 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 01:32:40.868557 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 01:32:40.868565 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 01:32:40.868574 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:32:40.868602 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 01:32:40.868617 
kernel: ata3.00: applying bridge limits Jan 20 01:32:40.868632 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:32:40.868677 kernel: ata3.00: configured for UDMA/100 Jan 20 01:32:40.868954 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 01:32:40.869175 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 01:32:40.869348 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 20 01:32:40.869386 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 01:32:40.869395 kernel: GPT:16515071 != 27000831 Jan 20 01:32:40.869404 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 01:32:40.869412 kernel: GPT:16515071 != 27000831 Jan 20 01:32:40.869420 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 01:32:40.869428 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:32:40.869705 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 01:32:40.869724 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:32:40.869958 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 01:32:40.869972 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 01:32:40.869981 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:32:40.870022 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 01:32:40.870031 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 01:32:40.870058 kernel: raid6: avx2x4 gen() 22029 MB/s Jan 20 01:32:40.870082 kernel: raid6: avx2x2 gen() 22497 MB/s Jan 20 01:32:40.870091 kernel: raid6: avx2x1 gen() 24423 MB/s Jan 20 01:32:40.870099 kernel: raid6: using algorithm avx2x1 gen() 24423 MB/s Jan 20 01:32:40.870107 kernel: raid6: .... 
xor() 24004 MB/s, rmw enabled Jan 20 01:32:40.870115 kernel: raid6: using avx2x2 recovery algorithm Jan 20 01:32:40.870147 kernel: xor: automatically using best checksumming function avx Jan 20 01:32:40.870173 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:32:40.870182 kernel: BTRFS: device fsid 95a8358a-4aa8-4215-9cd3-5b140c6c0a16 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181) Jan 20 01:32:40.870191 kernel: BTRFS info (device dm-0): first mount of filesystem 95a8358a-4aa8-4215-9cd3-5b140c6c0a16 Jan 20 01:32:40.870199 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:32:40.870207 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:32:40.870215 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 01:32:40.870223 kernel: loop: module loaded Jan 20 01:32:40.870248 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 01:32:40.870256 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:32:40.870265 systemd[1]: Successfully made /usr/ read-only. Jan 20 01:32:40.870276 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:32:40.870286 systemd[1]: Detected virtualization kvm. Jan 20 01:32:40.870294 systemd[1]: Detected architecture x86-64. Jan 20 01:32:40.870319 systemd[1]: Running in initrd. Jan 20 01:32:40.870327 systemd[1]: No hostname configured, using default hostname. Jan 20 01:32:40.870336 systemd[1]: Hostname set to . Jan 20 01:32:40.870344 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 01:32:40.870352 systemd[1]: Queued start job for default target initrd.target. 
Jan 20 01:32:40.870361 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:32:40.870369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:32:40.870394 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:32:40.870404 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 01:32:40.870412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:32:40.870421 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 01:32:40.870430 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 01:32:40.870465 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:32:40.870481 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:32:40.870493 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:32:40.870505 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:32:40.870562 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:32:40.870579 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:32:40.870592 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:32:40.870636 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:32:40.870648 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:32:40.870660 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:32:40.870675 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:32:40.870689 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 20 01:32:40.870701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:32:40.870713 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:32:40.870756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:32:40.870802 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:32:40.870816 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:32:40.870827 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:32:40.870839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:32:40.870851 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:32:40.870931 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 01:32:40.870946 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:32:40.870960 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:32:40.870974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:32:40.870990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:32:40.871045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:32:40.871061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:32:40.871077 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 01:32:40.871093 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:32:40.871187 systemd-journald[317]: Collecting audit messages is enabled. 
Jan 20 01:32:40.871257 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:32:40.871270 kernel: Bridge firewalling registered Jan 20 01:32:40.871283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:32:40.871295 systemd-journald[317]: Journal started Jan 20 01:32:40.871344 systemd-journald[317]: Runtime Journal (/run/log/journal/f588fff956fa47abb41e0889fc674905) is 6M, max 48M, 42M free. Jan 20 01:32:40.870851 systemd-modules-load[319]: Inserted module 'br_netfilter' Jan 20 01:32:40.888738 kernel: audit: type=1130 audit(1768872760.875:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.888771 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:32:40.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.893091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:32:40.900314 kernel: audit: type=1130 audit(1768872760.891:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:40.913848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:32:40.927748 kernel: audit: type=1130 audit(1768872760.905:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.927788 kernel: audit: type=1130 audit(1768872760.914:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.931320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:32:40.933368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:32:40.940262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:32:40.952862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:32:40.973709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:32:40.976216 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 01:32:40.989820 kernel: audit: type=1130 audit(1768872760.978:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:40.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:40.979072 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:32:40.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.000244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:32:41.017763 kernel: audit: type=1130 audit(1768872760.990:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.017810 kernel: audit: type=1130 audit(1768872761.010:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.017576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:32:41.127165 kernel: audit: type=1130 audit(1768872761.030:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.145049 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 20 01:32:41.157864 kernel: audit: type=1334 audit(1768872761.151:10): prog-id=6 op=LOAD Jan 20 01:32:41.151000 audit: BPF prog-id=6 op=LOAD Jan 20 01:32:41.157932 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:32:41.266483 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44 Jan 20 01:32:41.368835 systemd-resolved[356]: Positive Trust Anchors: Jan 20 01:32:41.368916 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:32:41.368922 systemd-resolved[356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 01:32:41.368969 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:32:41.441326 systemd-resolved[356]: Defaulting to hostname 'linux'. Jan 20 01:32:41.445011 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:32:41.451100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 20 01:32:41.471784 kernel: audit: type=1130 audit(1768872761.450:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:41.747216 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:32:41.793027 kernel: iscsi: registered transport (tcp) Jan 20 01:32:41.849868 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:32:41.850423 kernel: QLogic iSCSI HBA Driver Jan 20 01:32:42.141350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:32:42.227699 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:32:42.246583 kernel: audit: type=1130 audit(1768872762.230:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.234230 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:32:42.314734 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:32:42.331745 kernel: audit: type=1130 audit(1768872762.317:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:42.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.320163 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:32:42.339447 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:32:42.373589 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:32:42.392502 kernel: audit: type=1130 audit(1768872762.374:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.392593 kernel: audit: type=1334 audit(1768872762.385:15): prog-id=7 op=LOAD Jan 20 01:32:42.392612 kernel: audit: type=1334 audit(1768872762.385:16): prog-id=8 op=LOAD Jan 20 01:32:42.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.385000 audit: BPF prog-id=7 op=LOAD Jan 20 01:32:42.385000 audit: BPF prog-id=8 op=LOAD Jan 20 01:32:42.388943 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:32:42.490190 systemd-udevd[581]: Using default interface naming scheme 'v257'. Jan 20 01:32:42.535058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:32:42.558744 kernel: audit: type=1130 audit(1768872762.539:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:42.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.552743 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 01:32:42.620830 dracut-pre-trigger[653]: rd.md=0: removing MD RAID activation Jan 20 01:32:42.671086 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:32:42.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.684685 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:32:42.693069 kernel: audit: type=1130 audit(1768872762.680:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.693109 kernel: audit: type=1334 audit(1768872762.680:19): prog-id=9 op=LOAD Jan 20 01:32:42.680000 audit: BPF prog-id=9 op=LOAD Jan 20 01:32:42.697403 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:32:42.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.701221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:32:42.710191 kernel: audit: type=1130 audit(1768872762.698:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:42.788778 systemd-networkd[727]: lo: Link UP Jan 20 01:32:42.788787 systemd-networkd[727]: lo: Gained carrier Jan 20 01:32:42.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.789821 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:32:42.791359 systemd[1]: Reached target network.target - Network. Jan 20 01:32:42.876189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:32:42.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:42.884661 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:32:42.980908 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 01:32:42.999201 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 01:32:43.808566 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 01:32:43.839786 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 01:32:43.852602 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 01:32:43.855559 kernel: AES CTR mode by8 optimization enabled Jan 20 01:32:43.860185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:32:43.870064 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:32:43.871370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 20 01:32:43.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:43.873770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:32:43.876377 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:32:43.876408 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:32:43.878221 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:32:43.879926 systemd-networkd[727]: eth0: Link UP Jan 20 01:32:43.881189 systemd-networkd[727]: eth0: Gained carrier Jan 20 01:32:43.881201 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:32:43.907931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:32:43.910700 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:32:43.947801 disk-uuid[831]: Primary Header is updated. Jan 20 01:32:43.947801 disk-uuid[831]: Secondary Entries is updated. Jan 20 01:32:43.947801 disk-uuid[831]: Secondary Header is updated. Jan 20 01:32:43.966927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:32:43.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:44.061392 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 20 01:32:44.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:44.066052 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:32:44.072731 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:32:44.076583 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:32:44.081398 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:32:44.131747 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:32:44.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:45.010426 disk-uuid[840]: Warning: The kernel is still using the old partition table. Jan 20 01:32:45.010426 disk-uuid[840]: The new table will be used at the next reboot or after you Jan 20 01:32:45.010426 disk-uuid[840]: run partprobe(8) or kpartx(8) Jan 20 01:32:45.010426 disk-uuid[840]: The operation has completed successfully. Jan 20 01:32:45.034985 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:32:45.035219 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:32:45.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:45.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:45.043873 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 01:32:45.088634 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864) Jan 20 01:32:45.094107 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 01:32:45.094210 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:32:45.100953 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:32:45.100993 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:32:45.110590 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 01:32:45.112617 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:32:45.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:45.119646 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 01:32:46.126651 systemd-networkd[727]: eth0: Gained IPv6LL Jan 20 01:32:46.180641 ignition[883]: Ignition 2.24.0 Jan 20 01:32:46.180677 ignition[883]: Stage: fetch-offline Jan 20 01:32:46.180893 ignition[883]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:32:46.180916 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:32:46.181575 ignition[883]: parsed url from cmdline: "" Jan 20 01:32:46.181581 ignition[883]: no config URL provided Jan 20 01:32:46.181592 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:32:46.181611 ignition[883]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:32:46.181748 ignition[883]: op(1): [started] loading QEMU firmware config module Jan 20 01:32:46.181756 ignition[883]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 01:32:46.200180 ignition[883]: op(1): [finished] loading QEMU firmware config module Jan 20 01:32:46.379651 ignition[883]: parsing config with SHA512: e25e53755d4b0f47aca17c87d8bd830fec17f889df7bfef50e3547a7b33f894d715bf5f639438ed9a0f21c48a0a75a5a8a747aed7b83738c243d9a8c93292122 Jan 20 01:32:46.399575 unknown[883]: fetched base config from "system" Jan 20 01:32:46.399610 unknown[883]: fetched user config from "qemu" Jan 20 01:32:46.404687 ignition[883]: fetch-offline: fetch-offline passed Jan 20 01:32:46.406992 ignition[883]: Ignition finished successfully Jan 20 01:32:46.410599 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:32:46.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:46.417110 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 01:32:46.423173 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 20 01:32:46.776493 ignition[893]: Ignition 2.24.0 Jan 20 01:32:46.776577 ignition[893]: Stage: kargs Jan 20 01:32:46.776879 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:32:46.776891 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:32:46.786634 ignition[893]: kargs: kargs passed Jan 20 01:32:46.786705 ignition[893]: Ignition finished successfully Jan 20 01:32:46.793746 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:32:46.810381 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 20 01:32:46.810426 kernel: audit: type=1130 audit(1768872766.796:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:46.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:46.799668 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:32:47.018340 ignition[901]: Ignition 2.24.0 Jan 20 01:32:47.018367 ignition[901]: Stage: disks Jan 20 01:32:47.018600 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:32:47.018616 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:32:47.019833 ignition[901]: disks: disks passed Jan 20 01:32:47.019885 ignition[901]: Ignition finished successfully Jan 20 01:32:47.035899 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:32:47.050223 kernel: audit: type=1130 audit(1768872767.036:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:47.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:47.037730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:32:47.054293 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:32:47.055567 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:32:47.063627 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:32:47.075950 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:32:47.085225 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:32:47.147667 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 01:32:47.154330 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:32:47.171078 kernel: audit: type=1130 audit(1768872767.155:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:47.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:47.157629 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:32:47.314641 kernel: EXT4-fs (vda9): mounted filesystem 452c2147-bc43-4f48-ad5f-dc139dd95c0b r/w with ordered data mode. Quota mode: none. Jan 20 01:32:47.316107 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:32:47.317962 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:32:47.328976 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 20 01:32:47.332717 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:32:47.337742 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 01:32:47.337798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:32:47.337831 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:32:47.366682 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:32:47.385796 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Jan 20 01:32:47.385830 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 01:32:47.385891 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:32:47.371562 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:32:47.394025 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:32:47.394044 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:32:47.396127 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:32:47.612000 kernel: hrtimer: interrupt took 3241875 ns Jan 20 01:32:47.927438 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:32:47.943907 kernel: audit: type=1130 audit(1768872767.929:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:47.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:47.931934 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:32:47.954706 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:32:47.971483 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:32:47.976319 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 01:32:48.014462 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:32:48.027400 kernel: audit: type=1130 audit(1768872768.016:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:48.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:48.073675 ignition[1018]: INFO : Ignition 2.24.0 Jan 20 01:32:48.073675 ignition[1018]: INFO : Stage: mount Jan 20 01:32:48.078474 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:32:48.078474 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:32:48.085061 ignition[1018]: INFO : mount: mount passed Jan 20 01:32:48.085061 ignition[1018]: INFO : Ignition finished successfully Jan 20 01:32:48.091866 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:32:48.102172 kernel: audit: type=1130 audit(1768872768.094:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:48.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:48.096264 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:32:48.317858 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:32:48.363067 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1031) Jan 20 01:32:48.363118 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 01:32:48.365555 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:32:48.373412 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:32:48.373463 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:32:48.376262 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:32:48.444922 ignition[1048]: INFO : Ignition 2.24.0 Jan 20 01:32:48.444922 ignition[1048]: INFO : Stage: files Jan 20 01:32:48.449268 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:32:48.449268 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:32:48.449268 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:32:48.449268 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:32:48.449268 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:32:48.468073 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:32:48.468073 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:32:48.477216 unknown[1048]: wrote ssh authorized keys file for user: core Jan 20 01:32:48.480790 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:32:48.485342 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 01:32:48.485342 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 01:32:48.543487 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:32:48.655110 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 20 01:32:48.662785 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:32:48.728859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:32:48.728859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:32:48.728859 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 01:32:49.047040 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:32:50.730367 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:32:50.730367 ignition[1048]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:32:50.740930 ignition[1048]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:32:50.747586 ignition[1048]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:32:50.747586 ignition[1048]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:32:50.747586 ignition[1048]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 01:32:50.747586 ignition[1048]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 
01:32:50.747586 ignition[1048]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:32:50.747586 ignition[1048]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 01:32:50.747586 ignition[1048]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 01:32:50.796013 ignition[1048]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:32:50.808061 ignition[1048]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:32:50.812883 ignition[1048]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 01:32:50.812883 ignition[1048]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:32:50.812883 ignition[1048]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:32:50.812883 ignition[1048]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:32:50.812883 ignition[1048]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:32:50.812883 ignition[1048]: INFO : files: files passed Jan 20 01:32:50.812883 ignition[1048]: INFO : Ignition finished successfully Jan 20 01:32:50.850584 kernel: audit: type=1130 audit(1768872770.823:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:50.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:50.822763 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:32:50.826315 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:32:50.861663 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:32:50.866681 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:32:50.887544 kernel: audit: type=1130 audit(1768872770.869:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:50.887571 kernel: audit: type=1131 audit(1768872770.869:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:50.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:50.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:50.866811 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 20 01:32:50.896196 initrd-setup-root-after-ignition[1080]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 01:32:50.904986 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:32:50.904986 initrd-setup-root-after-ignition[1082]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:32:50.915572 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:32:50.922094 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:32:50.937768 kernel: audit: type=1130 audit(1768872770.922:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:50.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:50.923770 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:32:50.944968 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:32:51.367234 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:32:51.370643 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:32:51.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:51.378693 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:32:51.385626 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:32:51.392271 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:32:51.393856 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:32:51.440695 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:32:51.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.448657 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:32:51.482734 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:32:51.482957 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:32:51.488931 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:32:51.494857 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:32:51.499989 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:32:51.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.500229 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:32:51.508570 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:32:51.514011 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:32:51.518762 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 20 01:32:51.523767 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:32:51.529280 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:32:51.534942 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:32:51.540649 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:32:51.545935 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:32:51.551862 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:32:51.557481 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:32:51.562609 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:32:51.571295 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:32:51.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.571433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:32:51.579300 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:32:51.585506 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:32:51.587390 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:32:51.587633 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:32:51.599368 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:32:51.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.599480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 20 01:32:51.615369 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:32:51.615577 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:32:51.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.623218 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:32:51.627925 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:32:51.631590 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:32:51.637220 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:32:51.642879 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:32:51.649364 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:32:51.649458 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:32:51.651226 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:32:51.651368 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:32:51.663389 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 01:32:51.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.663487 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:32:51.670192 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 20 01:32:51.670381 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:32:51.675019 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:32:51.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.675224 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:32:51.682331 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:32:51.684069 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:32:51.684282 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:32:51.692039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:32:51.715396 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:32:51.715751 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:32:51.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.721511 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:32:51.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.721771 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:32:51.730390 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:32:51.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 01:32:51.733565 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:32:51.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.742680 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:32:51.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.742806 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:32:51.748618 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:32:51.783185 ignition[1106]: INFO : Ignition 2.24.0 Jan 20 01:32:51.783185 ignition[1106]: INFO : Stage: umount Jan 20 01:32:51.788207 ignition[1106]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:32:51.788207 ignition[1106]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:32:51.797632 ignition[1106]: INFO : umount: umount passed Jan 20 01:32:51.800333 ignition[1106]: INFO : Ignition finished successfully Jan 20 01:32:51.804916 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:32:51.805107 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:32:51.822469 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 20 01:32:51.822561 kernel: audit: type=1131 audit(1768872771.806:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:51.807856 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:32:51.833305 kernel: audit: type=1131 audit(1768872771.823:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.807982 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:32:51.825052 systemd[1]: Stopped target network.target - Network. Jan 20 01:32:51.850056 kernel: audit: type=1131 audit(1768872771.838:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.834322 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:32:51.863480 kernel: audit: type=1131 audit(1768872771.852:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.834388 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 20 01:32:51.887188 kernel: audit: type=1131 audit(1768872771.867:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.887255 kernel: audit: type=1131 audit(1768872771.876:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.839406 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:32:51.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.895657 kernel: audit: type=1131 audit(1768872771.888:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.839461 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:32:51.852865 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:32:51.852931 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:32:51.868375 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 20 01:32:51.924692 kernel: audit: type=1131 audit(1768872771.911:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.868436 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:32:51.877337 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:32:51.877400 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:32:51.938390 kernel: audit: type=1334 audit(1768872771.932:64): prog-id=6 op=UNLOAD Jan 20 01:32:51.932000 audit: BPF prog-id=6 op=UNLOAD Jan 20 01:32:51.888469 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:32:51.896469 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:32:51.909067 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:32:51.957381 kernel: audit: type=1131 audit(1768872771.942:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.909314 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:32:51.942362 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:32:51.942567 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:32:51.964790 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 20 01:32:51.972000 audit: BPF prog-id=9 op=UNLOAD Jan 20 01:32:51.972727 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:32:51.972794 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:32:51.983829 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:32:51.985003 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:32:51.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.985101 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:32:51.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.990278 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:32:52.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:51.990336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:32:51.995235 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:32:51.995302 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:32:52.002411 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:32:52.027908 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:32:52.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:52.028207 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:32:52.033800 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:32:52.033861 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:32:52.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.035501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:32:52.035614 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:32:52.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.043359 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:32:52.043414 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:32:52.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.055249 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:32:52.055308 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:32:52.065103 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:32:52.065199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:32:52.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:52.078135 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:32:52.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.084285 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 01:32:52.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.084344 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:32:52.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.091366 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:32:52.091431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:32:52.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:52.101440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:32:52.101505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:32:52.110343 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 20 01:32:52.110460 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:32:52.126086 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:32:52.126312 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:32:52.128779 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:32:52.134728 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:32:52.185483 systemd[1]: Switching root. Jan 20 01:32:52.223954 systemd-journald[317]: Journal stopped Jan 20 01:32:53.701763 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). Jan 20 01:32:53.701847 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:32:53.701874 kernel: SELinux: policy capability open_perms=1 Jan 20 01:32:53.701891 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:32:53.701912 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:32:53.701929 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:32:53.701981 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:32:53.702026 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:32:53.702038 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:32:53.702050 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 01:32:53.702062 systemd[1]: Successfully loaded SELinux policy in 80.606ms. Jan 20 01:32:53.702084 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.690ms. Jan 20 01:32:53.702097 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:32:53.702131 systemd[1]: Detected virtualization kvm. 
Jan 20 01:32:53.702183 systemd[1]: Detected architecture x86-64. Jan 20 01:32:53.702203 systemd[1]: Detected first boot. Jan 20 01:32:53.702220 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 01:32:53.702239 zram_generator::config[1151]: No configuration found. Jan 20 01:32:53.702274 kernel: Guest personality initialized and is inactive Jan 20 01:32:53.702329 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 01:32:53.702373 kernel: Initialized host personality Jan 20 01:32:53.702389 kernel: NET: Registered PF_VSOCK protocol family Jan 20 01:32:53.702407 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:32:53.702430 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:32:53.702442 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:32:53.702454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:32:53.702470 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:32:53.702504 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:32:53.702608 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:32:53.702622 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:32:53.702641 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:32:53.702663 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:32:53.702681 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:32:53.702811 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:32:53.702862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 20 01:32:53.702884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:32:53.702902 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:32:53.702945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:32:53.702958 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:32:53.702971 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:32:53.703003 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 01:32:53.703028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:32:53.703045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:32:53.703101 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:32:53.703122 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:32:53.703173 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:32:53.703196 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:32:53.703213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:32:53.703229 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:32:53.703249 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 01:32:53.703300 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:32:53.703319 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:32:53.703337 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:32:53.703354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Jan 20 01:32:53.703366 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 01:32:53.703377 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:32:53.703389 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 01:32:53.703422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:32:53.703434 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 01:32:53.703446 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 01:32:53.703458 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:32:53.703475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:32:53.703487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:32:53.703498 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:32:53.703510 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:32:53.703586 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:32:53.703600 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:32:53.703617 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:32:53.703669 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:32:53.703689 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:32:53.703711 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:32:53.703760 systemd[1]: Reached target machines.target - Containers. 
Jan 20 01:32:53.703782 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:32:53.703801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:32:53.703819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:32:53.703835 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:32:53.703848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:32:53.703859 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:32:53.703894 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:32:53.703906 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:32:53.703918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:32:53.703930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:32:53.703943 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:32:53.703954 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:32:53.703966 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:32:53.703997 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:32:53.704010 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:32:53.704022 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:32:53.704051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 20 01:32:53.704063 kernel: ACPI: bus type drm_connector registered Jan 20 01:32:53.704075 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:32:53.704087 kernel: fuse: init (API version 7.41) Jan 20 01:32:53.704111 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:32:53.704123 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 01:32:53.704177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:32:53.704215 systemd-journald[1232]: Collecting audit messages is enabled. Jan 20 01:32:53.704263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:32:53.704277 systemd-journald[1232]: Journal started Jan 20 01:32:53.704297 systemd-journald[1232]: Runtime Journal (/run/log/journal/f588fff956fa47abb41e0889fc674905) is 6M, max 48M, 42M free. Jan 20 01:32:53.400000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 01:32:53.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:53.630000 audit: BPF prog-id=14 op=UNLOAD Jan 20 01:32:53.630000 audit: BPF prog-id=13 op=UNLOAD Jan 20 01:32:53.631000 audit: BPF prog-id=15 op=LOAD Jan 20 01:32:53.632000 audit: BPF prog-id=16 op=LOAD Jan 20 01:32:53.632000 audit: BPF prog-id=17 op=LOAD Jan 20 01:32:53.698000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 01:32:53.698000 audit[1232]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff4f2146c0 a2=4000 a3=0 items=0 ppid=1 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:32:53.698000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 01:32:53.180299 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:32:53.205856 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 01:32:53.206510 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:32:53.207031 systemd[1]: systemd-journald.service: Consumed 1.391s CPU time. Jan 20 01:32:53.714931 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:32:53.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.718771 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:32:53.721632 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:32:53.725000 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:32:53.727920 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 20 01:32:53.731112 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:32:53.734485 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:32:53.738183 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:32:53.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.742493 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:32:53.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.746830 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:32:53.747195 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:32:53.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.751840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:32:53.752262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:32:53.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:32:53.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.756041 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:32:53.756394 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:32:53.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.759991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:32:53.760390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:32:53.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:32:53.764335 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:32:53.764753 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:32:53.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 20 01:32:53.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.768254 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:32:53.768721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:32:53.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.772302 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:32:53.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.776038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:32:53.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.781488 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 01:32:53.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.785785 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 01:32:53.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.805677 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:32:53.809801 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 20 01:32:53.814826 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 01:32:53.818899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 01:32:53.822784 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 01:32:53.823612 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 01:32:53.827961 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 01:32:53.832371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:32:53.832496 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 01:32:53.834279 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 01:32:53.839867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 01:32:53.843443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:32:53.845062 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 01:32:53.848782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:32:53.852088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:32:53.854899 systemd-journald[1232]: Time spent on flushing to /var/log/journal/f588fff956fa47abb41e0889fc674905 is 67.936ms for 1194 entries.
Jan 20 01:32:53.854899 systemd-journald[1232]: System Journal (/var/log/journal/f588fff956fa47abb41e0889fc674905) is 8M, max 163.5M, 155.5M free.
Jan 20 01:32:53.939788 systemd-journald[1232]: Received client request to flush runtime journal.
Jan 20 01:32:53.939876 kernel: loop1: detected capacity change from 0 to 111560
Jan 20 01:32:53.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.861719 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 01:32:53.871822 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 01:32:53.879563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:32:53.885225 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 01:32:53.894509 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 01:32:53.902797 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 01:32:53.907974 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 01:32:53.914838 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 01:32:53.919979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:32:53.943943 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 01:32:53.950577 kernel: loop2: detected capacity change from 0 to 50784
Jan 20 01:32:53.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.957454 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 01:32:53.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:53.963000 audit: BPF prog-id=18 op=LOAD
Jan 20 01:32:53.963000 audit: BPF prog-id=19 op=LOAD
Jan 20 01:32:53.963000 audit: BPF prog-id=20 op=LOAD
Jan 20 01:32:53.966853 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 20 01:32:53.971000 audit: BPF prog-id=21 op=LOAD
Jan 20 01:32:53.974776 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:32:53.981735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:32:53.985777 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 01:32:53.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:54.003106 kernel: loop3: detected capacity change from 0 to 224512
Jan 20 01:32:53.997000 audit: BPF prog-id=22 op=LOAD
Jan 20 01:32:53.998000 audit: BPF prog-id=23 op=LOAD
Jan 20 01:32:53.998000 audit: BPF prog-id=24 op=LOAD
Jan 20 01:32:54.000958 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 20 01:32:54.005000 audit: BPF prog-id=25 op=LOAD
Jan 20 01:32:54.005000 audit: BPF prog-id=26 op=LOAD
Jan 20 01:32:54.005000 audit: BPF prog-id=27 op=LOAD
Jan 20 01:32:54.007012 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 01:32:54.034391 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Jan 20 01:32:54.034413 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Jan 20 01:32:54.042264 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:32:54.049716 kernel: loop4: detected capacity change from 0 to 111560
Jan 20 01:32:54.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:54.352861 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 01:32:54.354591 kernel: loop5: detected capacity change from 0 to 50784
Jan 20 01:32:54.376855 kernel: loop6: detected capacity change from 0 to 224512
Jan 20 01:32:54.394898 (sd-merge)[1298]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 20 01:32:54.400041 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 01:32:54.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:54.413860 systemd-nsresourced[1292]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 20 01:32:54.416304 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 20 01:32:54.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:54.418773 (sd-merge)[1298]: Merged extensions into '/usr'.
Jan 20 01:32:54.429717 systemd[1]: Reload requested from client PID 1271 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 01:32:54.429880 systemd[1]: Reloading...
Jan 20 01:32:54.532993 systemd-oomd[1287]: No swap; memory pressure usage will be degraded
Jan 20 01:32:54.538613 zram_generator::config[1340]: No configuration found.
Jan 20 01:32:54.598448 systemd-resolved[1289]: Positive Trust Anchors:
Jan 20 01:32:54.598478 systemd-resolved[1289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:32:54.598483 systemd-resolved[1289]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 01:32:54.598510 systemd-resolved[1289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:32:54.604075 systemd-resolved[1289]: Defaulting to hostname 'linux'.
Jan 20 01:32:54.810498 systemd[1]: Reloading finished in 380 ms.
Jan 20 01:32:54.845752 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 20 01:32:54.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:54.849778 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:32:54.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:54.853560 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 01:32:54.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:54.862878 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:32:54.889277 systemd[1]: Starting ensure-sysext.service...
Jan 20 01:32:54.893576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:32:54.898000 audit: BPF prog-id=28 op=LOAD
Jan 20 01:32:54.904000 audit: BPF prog-id=15 op=UNLOAD
Jan 20 01:32:54.904000 audit: BPF prog-id=29 op=LOAD
Jan 20 01:32:54.904000 audit: BPF prog-id=30 op=LOAD
Jan 20 01:32:54.904000 audit: BPF prog-id=16 op=UNLOAD
Jan 20 01:32:54.904000 audit: BPF prog-id=17 op=UNLOAD
Jan 20 01:32:54.907000 audit: BPF prog-id=31 op=LOAD
Jan 20 01:32:54.907000 audit: BPF prog-id=22 op=UNLOAD
Jan 20 01:32:54.908000 audit: BPF prog-id=32 op=LOAD
Jan 20 01:32:54.908000 audit: BPF prog-id=33 op=LOAD
Jan 20 01:32:54.908000 audit: BPF prog-id=23 op=UNLOAD
Jan 20 01:32:54.908000 audit: BPF prog-id=24 op=UNLOAD
Jan 20 01:32:54.911000 audit: BPF prog-id=34 op=LOAD
Jan 20 01:32:54.911000 audit: BPF prog-id=21 op=UNLOAD
Jan 20 01:32:54.912000 audit: BPF prog-id=35 op=LOAD
Jan 20 01:32:54.912000 audit: BPF prog-id=18 op=UNLOAD
Jan 20 01:32:54.913000 audit: BPF prog-id=36 op=LOAD
Jan 20 01:32:54.913000 audit: BPF prog-id=37 op=LOAD
Jan 20 01:32:54.913000 audit: BPF prog-id=19 op=UNLOAD
Jan 20 01:32:54.913000 audit: BPF prog-id=20 op=UNLOAD
Jan 20 01:32:54.914000 audit: BPF prog-id=38 op=LOAD
Jan 20 01:32:54.914000 audit: BPF prog-id=25 op=UNLOAD
Jan 20 01:32:54.914000 audit: BPF prog-id=39 op=LOAD
Jan 20 01:32:54.914000 audit: BPF prog-id=40 op=LOAD
Jan 20 01:32:54.914000 audit: BPF prog-id=26 op=UNLOAD
Jan 20 01:32:54.914000 audit: BPF prog-id=27 op=UNLOAD
Jan 20 01:32:54.924121 systemd[1]: Reload requested from client PID 1376 ('systemctl') (unit ensure-sysext.service)...
Jan 20 01:32:54.924193 systemd[1]: Reloading...
Jan 20 01:32:54.929471 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 01:32:54.929603 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 01:32:54.930134 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 01:32:54.932981 systemd-tmpfiles[1377]: ACLs are not supported, ignoring.
Jan 20 01:32:54.933108 systemd-tmpfiles[1377]: ACLs are not supported, ignoring.
Jan 20 01:32:54.945728 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:32:54.945758 systemd-tmpfiles[1377]: Skipping /boot
Jan 20 01:32:54.963438 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:32:54.963478 systemd-tmpfiles[1377]: Skipping /boot
Jan 20 01:32:55.007571 zram_generator::config[1406]: No configuration found.
Jan 20 01:32:55.296283 systemd[1]: Reloading finished in 371 ms.
Jan 20 01:32:55.322673 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 01:32:55.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:55.329000 audit: BPF prog-id=41 op=LOAD
Jan 20 01:32:55.329000 audit: BPF prog-id=35 op=UNLOAD
Jan 20 01:32:55.329000 audit: BPF prog-id=42 op=LOAD
Jan 20 01:32:55.330000 audit: BPF prog-id=43 op=LOAD
Jan 20 01:32:55.330000 audit: BPF prog-id=36 op=UNLOAD
Jan 20 01:32:55.330000 audit: BPF prog-id=37 op=UNLOAD
Jan 20 01:32:55.334000 audit: BPF prog-id=44 op=LOAD
Jan 20 01:32:55.334000 audit: BPF prog-id=34 op=UNLOAD
Jan 20 01:32:55.351000 audit: BPF prog-id=45 op=LOAD
Jan 20 01:32:55.351000 audit: BPF prog-id=31 op=UNLOAD
Jan 20 01:32:55.351000 audit: BPF prog-id=46 op=LOAD
Jan 20 01:32:55.351000 audit: BPF prog-id=47 op=LOAD
Jan 20 01:32:55.351000 audit: BPF prog-id=32 op=UNLOAD
Jan 20 01:32:55.351000 audit: BPF prog-id=33 op=UNLOAD
Jan 20 01:32:55.352000 audit: BPF prog-id=48 op=LOAD
Jan 20 01:32:55.352000 audit: BPF prog-id=38 op=UNLOAD
Jan 20 01:32:55.353000 audit: BPF prog-id=49 op=LOAD
Jan 20 01:32:55.353000 audit: BPF prog-id=50 op=LOAD
Jan 20 01:32:55.353000 audit: BPF prog-id=39 op=UNLOAD
Jan 20 01:32:55.353000 audit: BPF prog-id=40 op=UNLOAD
Jan 20 01:32:55.354000 audit: BPF prog-id=51 op=LOAD
Jan 20 01:32:55.354000 audit: BPF prog-id=28 op=UNLOAD
Jan 20 01:32:55.354000 audit: BPF prog-id=52 op=LOAD
Jan 20 01:32:55.354000 audit: BPF prog-id=53 op=LOAD
Jan 20 01:32:55.354000 audit: BPF prog-id=29 op=UNLOAD
Jan 20 01:32:55.354000 audit: BPF prog-id=30 op=UNLOAD
Jan 20 01:32:55.358773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:32:55.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:55.374634 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:32:55.378477 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 01:32:55.397283 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 01:32:55.404469 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 01:32:55.412000 audit: BPF prog-id=8 op=UNLOAD
Jan 20 01:32:55.412000 audit: BPF prog-id=7 op=UNLOAD
Jan 20 01:32:55.413000 audit: BPF prog-id=54 op=LOAD
Jan 20 01:32:55.413000 audit: BPF prog-id=55 op=LOAD
Jan 20 01:32:55.415686 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:32:55.423850 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 01:32:55.433014 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:32:55.433306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:32:55.436636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:32:55.442607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:32:55.454432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:32:55.457366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:32:55.457651 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 01:32:55.457748 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:32:55.457828 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:32:55.462000 audit[1460]: SYSTEM_BOOT pid=1460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:55.464235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:32:55.464623 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:32:55.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:55.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:55.476673 systemd-udevd[1458]: Using default interface naming scheme 'v257'.
Jan 20 01:32:55.482843 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 01:32:55.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:55.492218 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 01:32:55.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:32:55.499745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:32:55.500054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:32:55.502000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 20 01:32:55.502000 audit[1477]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5dc43dd0 a2=420 a3=0 items=0 ppid=1448 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:32:55.502000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 01:32:55.504977 augenrules[1477]: No rules
Jan 20 01:32:55.505022 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:32:55.505383 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:32:55.510366 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:32:55.510735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:32:55.529407 systemd[1]: Finished ensure-sysext.service.
Jan 20 01:32:55.533466 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:32:55.533800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:32:55.535567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:32:55.541067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 01:32:55.545569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:32:55.545741 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 01:32:55.545798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:32:55.545872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:32:55.553444 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 01:32:55.557665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:32:55.558090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:32:55.563104 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:32:55.563721 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:32:55.568318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:32:55.568760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:32:56.107136 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 01:32:56.158743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:32:56.160491 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 01:32:56.191210 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 01:32:56.721180 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 01:32:56.725596 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 01:32:56.730595 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 01:32:56.806948 systemd-networkd[1506]: lo: Link UP
Jan 20 01:32:56.806960 systemd-networkd[1506]: lo: Gained carrier
Jan 20 01:32:56.810787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 01:32:56.815673 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 01:32:56.823355 systemd[1]: Reached target network.target - Network.
Jan 20 01:32:56.825956 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 20 01:32:56.825221 systemd-networkd[1506]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 01:32:56.825229 systemd-networkd[1506]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 01:32:56.828569 kernel: ACPI: button: Power Button [PWRF]
Jan 20 01:32:56.829099 systemd-networkd[1506]: eth0: Link UP
Jan 20 01:32:56.829435 systemd-networkd[1506]: eth0: Gained carrier
Jan 20 01:32:56.829476 systemd-networkd[1506]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 01:32:56.831864 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 01:32:56.837737 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 01:32:56.847716 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 01:32:56.854614 systemd-networkd[1506]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 01:32:56.855647 systemd-timesyncd[1495]: Network configuration changed, trying to establish connection.
Jan 20 01:32:57.967963 systemd-timesyncd[1495]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 01:32:57.968171 systemd-timesyncd[1495]: Initial clock synchronization to Tue 2026-01-20 01:32:57.967844 UTC.
Jan 20 01:32:57.968921 systemd-resolved[1289]: Clock change detected. Flushing caches.
Jan 20 01:32:57.994021 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 01:32:58.001561 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 01:32:58.003152 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 01:32:58.013181 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 20 01:32:58.020149 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 01:32:58.020492 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 01:32:58.255264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:32:58.385845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:32:58.386780 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:32:58.624888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:32:58.649598 kernel: kvm_amd: TSC scaling supported
Jan 20 01:32:58.649694 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 01:32:58.649793 kernel: kvm_amd: Nested Paging enabled
Jan 20 01:32:58.649845 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 01:32:58.649866 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 01:32:58.752552 kernel: EDAC MC: Ver: 3.0.0
Jan 20 01:32:58.800597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:32:58.822471 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 01:32:58.854852 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 01:32:58.906065 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 01:32:58.975602 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 01:32:58.980515 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 01:32:58.984195 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 01:32:58.988532 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 01:32:58.993208 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 01:32:58.997576 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 01:32:59.001492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 01:32:59.005644 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 20 01:32:59.010223 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 20 01:32:59.013806 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 01:32:59.017826 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 01:32:59.017895 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:32:59.020707 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:32:59.025605 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 01:32:59.031078 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 01:32:59.037396 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 01:32:59.041868 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 01:32:59.045798 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 01:32:59.051809 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 01:32:59.055454 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 01:32:59.059889 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 01:32:59.064136 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:32:59.067134 systemd[1]: Reached target basic.target - Basic System.
Jan 20 01:32:59.070059 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:32:59.070175 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:32:59.075277 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 01:32:59.082058 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 01:32:59.086448 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 01:32:59.090059 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 01:32:59.112553 jq[1567]: false
Jan 20 01:32:59.113293 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 01:32:59.116387 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 01:32:59.117957 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 20 01:32:59.122834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 01:32:59.128528 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 01:32:59.139397 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 01:32:59.146898 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 01:32:59.151120 extend-filesystems[1568]: Found /dev/vda6
Jan 20 01:32:59.155490 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 01:32:59.158944 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 01:32:59.160838 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 01:32:59.162670 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 01:32:59.165259 extend-filesystems[1568]: Found /dev/vda9
Jan 20 01:32:59.167666 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 01:32:59.174497 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 01:32:59.180046 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 01:32:59.180559 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 01:32:59.182061 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 01:32:59.190368 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 01:32:59.195257 systemd-networkd[1506]: eth0: Gained IPv6LL
Jan 20 01:32:59.204372 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 01:32:59.206337 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 01:32:59.212179 oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jan 20 01:32:59.216349 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jan 20 01:32:59.218260 extend-filesystems[1568]: Checking size of /dev/vda9
Jan 20 01:32:59.222617 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 01:32:59.229518 tar[1587]: linux-amd64/LICENSE
Jan 20 01:32:59.227278 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 01:32:59.231854 tar[1587]: linux-amd64/helm
Jan 20 01:32:59.238145 jq[1582]: true
Jan 20 01:32:59.246442 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 01:32:59.256388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:32:59.271127 update_engine[1580]: I20260120 01:32:59.258338 1580 main.cc:92] Flatcar Update Engine starting
Jan 20 01:32:59.269475 oslogin_cache_refresh[1569]: Failure getting users, quitting
Jan 20 01:32:59.271989 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting
Jan 20 01:32:59.271989 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 01:32:59.271989 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache
Jan 20 01:32:59.260651 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 01:32:59.269532 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 01:32:59.269620 oslogin_cache_refresh[1569]: Refreshing group entry cache
Jan 20 01:32:59.292822 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jan 20 01:32:59.292822 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 01:32:59.291384 oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jan 20 01:32:59.291402 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 01:32:59.293995 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 20 01:32:59.295359 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 20 01:32:59.343003 extend-filesystems[1568]: Resized partition /dev/vda9
Jan 20 01:32:59.350984 dbus-daemon[1565]: [system] SELinux support is enabled
Jan 20 01:32:59.375628 update_engine[1580]: I20260120 01:32:59.369660 1580 update_check_scheduler.cc:74] Next update check in 5m50s
Jan 20 01:32:59.375701 extend-filesystems[1626]: resize2fs 1.47.3 (8-Jul-2025)
Jan 20 01:32:59.387887 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 20 01:32:59.351437 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 01:32:59.358778 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 01:32:59.358814 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 01:32:59.382384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 01:32:59.382413 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 01:32:59.388003 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 01:32:59.410284 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 01:32:59.419428 jq[1609]: true
Jan 20 01:32:59.446702 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 01:32:59.493150 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 20 01:32:59.520632 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 01:32:59.521234 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 01:32:59.528698 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 01:32:59.531810 extend-filesystems[1626]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 01:32:59.531810 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 01:32:59.531810 extend-filesystems[1626]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 20 01:32:59.555361 extend-filesystems[1568]: Resized filesystem in /dev/vda9
Jan 20 01:32:59.539869 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 01:32:59.540333 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 01:32:59.576521 systemd-logind[1578]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 20 01:32:59.584246 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 20 01:32:59.584785 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 01:32:59.587144 bash[1657]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 01:32:59.586983 systemd-logind[1578]: New seat seat0.
Jan 20 01:32:59.591256 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 01:32:59.599671 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 01:32:59.604172 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 01:32:59.607911 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 01:32:59.644696 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 01:32:59.653549 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:32:59.671967 containerd[1611]: time="2026-01-20T01:32:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 01:32:59.671967 containerd[1611]: time="2026-01-20T01:32:59.671469802Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 01:32:59.681061 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:32:59.681565 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:32:59.688840 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.698918723Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.51µs" Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.698981810Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.699027425Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.699043796Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.699286198Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.699305855Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.699379653Z" level=info 
msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:32:59.699511 containerd[1611]: time="2026-01-20T01:32:59.699393339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:32:59.699892 containerd[1611]: time="2026-01-20T01:32:59.699664645Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:32:59.699892 containerd[1611]: time="2026-01-20T01:32:59.699684312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:32:59.699892 containerd[1611]: time="2026-01-20T01:32:59.699698368Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:32:59.699892 containerd[1611]: time="2026-01-20T01:32:59.699709629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 01:32:59.700061 containerd[1611]: time="2026-01-20T01:32:59.699996805Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 01:32:59.700061 containerd[1611]: time="2026-01-20T01:32:59.700016853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 01:32:59.701698 containerd[1611]: time="2026-01-20T01:32:59.700872200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 01:32:59.701698 containerd[1611]: time="2026-01-20T01:32:59.701217616Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:32:59.701698 containerd[1611]: time="2026-01-20T01:32:59.701255416Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:32:59.701698 containerd[1611]: time="2026-01-20T01:32:59.701268340Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 01:32:59.701698 containerd[1611]: time="2026-01-20T01:32:59.701332459Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 01:32:59.701698 containerd[1611]: time="2026-01-20T01:32:59.701597234Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 01:32:59.701698 containerd[1611]: time="2026-01-20T01:32:59.701684828Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:32:59.709075 containerd[1611]: time="2026-01-20T01:32:59.708998760Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 01:32:59.709315 containerd[1611]: time="2026-01-20T01:32:59.709251311Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 01:32:59.709876 containerd[1611]: time="2026-01-20T01:32:59.709681805Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 01:32:59.710042 containerd[1611]: time="2026-01-20T01:32:59.710022571Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 01:32:59.710274 containerd[1611]: time="2026-01-20T01:32:59.710194503Z" level=info msg="loading plugin" 
id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 01:32:59.710353 containerd[1611]: time="2026-01-20T01:32:59.710221102Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710447224Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710466651Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710483442Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710500364Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710517205Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710532483Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710544726Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710564092Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710708762Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710778022Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710798129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710812887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710826092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 01:32:59.711012 containerd[1611]: time="2026-01-20T01:32:59.710841481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 01:32:59.711556 containerd[1611]: time="2026-01-20T01:32:59.710855657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 01:32:59.711556 containerd[1611]: time="2026-01-20T01:32:59.710869292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 01:32:59.711556 containerd[1611]: time="2026-01-20T01:32:59.710884792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 01:32:59.711556 containerd[1611]: time="2026-01-20T01:32:59.710898366Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 01:32:59.711556 containerd[1611]: time="2026-01-20T01:32:59.710912272Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 01:32:59.711863 containerd[1611]: time="2026-01-20T01:32:59.711840436Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 01:32:59.712221 containerd[1611]: time="2026-01-20T01:32:59.712200237Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for 
snapshotter \"overlayfs\"" Jan 20 01:32:59.712409 containerd[1611]: time="2026-01-20T01:32:59.712390433Z" level=info msg="Start snapshots syncer" Jan 20 01:32:59.712659 containerd[1611]: time="2026-01-20T01:32:59.712596537Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 01:32:59.713821 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:32:59.714147 containerd[1611]: time="2026-01-20T01:32:59.713756243Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\"
:\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 01:32:59.714526 containerd[1611]: time="2026-01-20T01:32:59.714447033Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 01:32:59.714866 containerd[1611]: time="2026-01-20T01:32:59.714845227Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 01:32:59.715232 containerd[1611]: time="2026-01-20T01:32:59.715166427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 01:32:59.715405 containerd[1611]: time="2026-01-20T01:32:59.715387058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 01:32:59.715582 containerd[1611]: time="2026-01-20T01:32:59.715522231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 01:32:59.715666 containerd[1611]: time="2026-01-20T01:32:59.715651022Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 01:32:59.715936 containerd[1611]: time="2026-01-20T01:32:59.715784060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 01:32:59.715936 containerd[1611]: time="2026-01-20T01:32:59.715802214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 01:32:59.715936 containerd[1611]: time="2026-01-20T01:32:59.715815548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 01:32:59.715936 containerd[1611]: time="2026-01-20T01:32:59.715828653Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 01:32:59.715936 containerd[1611]: time="2026-01-20T01:32:59.715841367Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 01:32:59.715936 containerd[1611]: time="2026-01-20T01:32:59.715879568Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:32:59.716258 containerd[1611]: time="2026-01-20T01:32:59.716173006Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:32:59.716258 containerd[1611]: time="2026-01-20T01:32:59.716195899Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:32:59.716258 containerd[1611]: time="2026-01-20T01:32:59.716210737Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:32:59.716529 containerd[1611]: time="2026-01-20T01:32:59.716448270Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 01:32:59.716529 containerd[1611]: time="2026-01-20T01:32:59.716482775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 01:32:59.716641 containerd[1611]: time="2026-01-20T01:32:59.716625892Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 01:32:59.716858 containerd[1611]: time="2026-01-20T01:32:59.716772235Z" level=info msg="runtime interface created" Jan 20 01:32:59.716858 containerd[1611]: time="2026-01-20T01:32:59.716786893Z" level=info msg="created NRI interface" Jan 20 01:32:59.716858 containerd[1611]: time="2026-01-20T01:32:59.716798775Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 01:32:59.716858 containerd[1611]: time="2026-01-20T01:32:59.716813022Z" level=info msg="Connect containerd service" Jan 20 01:32:59.716858 containerd[1611]: time="2026-01-20T01:32:59.716835464Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:32:59.717936 containerd[1611]: time="2026-01-20T01:32:59.717909899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:32:59.723622 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:32:59.730027 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:32:59.733928 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:32:59.838294 containerd[1611]: time="2026-01-20T01:32:59.838183584Z" level=info msg="Start subscribing containerd event" Jan 20 01:32:59.838583 containerd[1611]: time="2026-01-20T01:32:59.838544859Z" level=info msg="Start recovering state" Jan 20 01:32:59.838811 containerd[1611]: time="2026-01-20T01:32:59.838594880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:32:59.839055 containerd[1611]: time="2026-01-20T01:32:59.838951298Z" level=info msg="Start event monitor" Jan 20 01:32:59.839143 containerd[1611]: time="2026-01-20T01:32:59.839069498Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:32:59.839314 containerd[1611]: time="2026-01-20T01:32:59.839234767Z" level=info msg="Start streaming server" Jan 20 01:32:59.839475 containerd[1611]: time="2026-01-20T01:32:59.839385980Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 01:32:59.839475 containerd[1611]: time="2026-01-20T01:32:59.839045612Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 20 01:32:59.839521 containerd[1611]: time="2026-01-20T01:32:59.839479595Z" level=info msg="runtime interface starting up..." Jan 20 01:32:59.839521 containerd[1611]: time="2026-01-20T01:32:59.839494452Z" level=info msg="starting plugins..." Jan 20 01:32:59.839621 containerd[1611]: time="2026-01-20T01:32:59.839517886Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 01:32:59.840163 containerd[1611]: time="2026-01-20T01:32:59.840067552Z" level=info msg="containerd successfully booted in 0.170288s" Jan 20 01:32:59.840394 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:32:59.877841 tar[1587]: linux-amd64/README.md Jan 20 01:32:59.903557 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:33:00.846336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:00.849956 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:33:00.854267 systemd[1]: Startup finished in 3.966s (kernel) + 12.201s (initrd) + 7.385s (userspace) = 23.553s. Jan 20 01:33:00.860644 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:33:02.196163 kubelet[1707]: E0120 01:33:02.195866 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:33:02.199407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:33:02.199697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:33:02.200653 systemd[1]: kubelet.service: Consumed 2.212s CPU time, 264.8M memory peak. 
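The kubelet failure above is expected on a node that has not yet been initialized or joined: `/var/lib/kubelet/config.yaml` is normally generated by `kubeadm init` or `kubeadm join`, and the unit will keep exiting until it exists. For reference, a minimal `KubeletConfiguration` of the kind that file contains might look like the following (the values are illustrative assumptions, not taken from this host):

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch; normally written by kubeadm
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Match the container runtime's cgroup driver (containerd above runs with SystemdCgroup: true)
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```

A mismatched `cgroupDriver` between kubelet and containerd is a common source of pod failures, so the `SystemdCgroup=true` visible in the containerd CRI config above is the value this field should agree with.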
Jan 20 01:33:08.713682 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:33:08.722293 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:40346.service - OpenSSH per-connection server daemon (10.0.0.1:40346). Jan 20 01:33:08.865244 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 40346 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:33:08.868539 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:33:08.879592 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:33:08.881033 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:33:08.888182 systemd-logind[1578]: New session 1 of user core. Jan 20 01:33:08.915538 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:33:08.919200 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:33:08.944645 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:33:08.949228 systemd-logind[1578]: New session 2 of user core. Jan 20 01:33:09.158000 systemd[1726]: Queued start job for default target default.target. Jan 20 01:33:09.178303 systemd[1726]: Created slice app.slice - User Application Slice. Jan 20 01:33:09.178375 systemd[1726]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 01:33:09.178395 systemd[1726]: Reached target paths.target - Paths. Jan 20 01:33:09.178896 systemd[1726]: Reached target timers.target - Timers. Jan 20 01:33:09.182579 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:33:09.184591 systemd[1726]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 01:33:09.204585 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 20 01:33:09.204785 systemd[1726]: Reached target sockets.target - Sockets. Jan 20 01:33:09.210677 systemd[1726]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 01:33:09.210900 systemd[1726]: Reached target basic.target - Basic System. Jan 20 01:33:09.211008 systemd[1726]: Reached target default.target - Main User Target. Jan 20 01:33:09.211137 systemd[1726]: Startup finished in 253ms. Jan 20 01:33:09.211330 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:33:09.213652 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:33:09.242893 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:40356.service - OpenSSH per-connection server daemon (10.0.0.1:40356). Jan 20 01:33:09.570119 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 40356 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:33:09.572656 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:33:09.581225 systemd-logind[1578]: New session 3 of user core. Jan 20 01:33:09.591613 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:33:09.614275 sshd[1744]: Connection closed by 10.0.0.1 port 40356 Jan 20 01:33:09.614667 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 20 01:33:09.625645 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:40356.service: Deactivated successfully. Jan 20 01:33:09.628075 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:33:09.629572 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Jan 20 01:33:09.632969 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:40368.service - OpenSSH per-connection server daemon (10.0.0.1:40368). Jan 20 01:33:09.633921 systemd-logind[1578]: Removed session 3. 
Jan 20 01:33:09.706532 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 40368 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:33:09.708583 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:33:09.715143 systemd-logind[1578]: New session 4 of user core. Jan 20 01:33:09.729329 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:33:09.742668 sshd[1755]: Connection closed by 10.0.0.1 port 40368 Jan 20 01:33:09.743270 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jan 20 01:33:09.752645 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:40368.service: Deactivated successfully. Jan 20 01:33:09.754843 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:33:09.756008 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:33:09.758766 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:40378.service - OpenSSH per-connection server daemon (10.0.0.1:40378). Jan 20 01:33:09.759440 systemd-logind[1578]: Removed session 4. Jan 20 01:33:09.837979 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 40378 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:33:09.846477 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:33:09.856007 systemd-logind[1578]: New session 5 of user core. Jan 20 01:33:09.869609 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:33:09.890292 sshd[1765]: Connection closed by 10.0.0.1 port 40378 Jan 20 01:33:09.890771 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Jan 20 01:33:09.902895 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:40378.service: Deactivated successfully. Jan 20 01:33:09.905388 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:33:09.906486 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. 
Jan 20 01:33:09.909880 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:40392.service - OpenSSH per-connection server daemon (10.0.0.1:40392). Jan 20 01:33:09.910683 systemd-logind[1578]: Removed session 5. Jan 20 01:33:09.979061 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 40392 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:33:09.981130 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:33:09.987016 systemd-logind[1578]: New session 6 of user core. Jan 20 01:33:09.997300 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:33:10.021427 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:33:10.021845 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:33:10.032124 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 20 01:33:10.033776 sshd[1776]: Connection closed by 10.0.0.1 port 40392 Jan 20 01:33:10.034216 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Jan 20 01:33:10.052374 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:40392.service: Deactivated successfully. Jan 20 01:33:10.054635 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:33:10.056003 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:33:10.059821 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:40394.service - OpenSSH per-connection server daemon (10.0.0.1:40394). Jan 20 01:33:10.060968 systemd-logind[1578]: Removed session 6. Jan 20 01:33:10.126274 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 40394 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:33:10.128605 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:33:10.135260 systemd-logind[1578]: New session 7 of user core. 
Jan 20 01:33:10.149373 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 01:33:10.168565 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:33:10.169073 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:33:10.177722 sudo[1790]: pam_unix(sudo:session): session closed for user root Jan 20 01:33:10.187901 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 01:33:10.188344 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:33:10.198867 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:33:10.258000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 01:33:10.260059 augenrules[1814]: No rules Jan 20 01:33:10.261068 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:33:10.261510 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
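The `audit-rules.service` restart above runs augenrules, which concatenates whatever rule files it finds under `/etc/audit/rules.d/`; after the two default files were removed, it finds none, hence the "No rules" line and the `remove_rule` CONFIG_CHANGE event. As a hedged example, a rules file of the kind that could live in that directory looks like this (the watched path and key name are illustrative):

```text
# /etc/audit/rules.d/10-example.rules -- illustrative auditctl rule syntax
# Flush any previously loaded rules
-D
# Watch writes and attribute changes to sshd's config, tagged with a search key
-w /etc/ssh/sshd_config -p wa -k sshd-config
```

Rules loaded this way can later be searched with `ausearch -k sshd-config`.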
Jan 20 01:33:10.266133 kernel: kauditd_printk_skb: 148 callbacks suppressed Jan 20 01:33:10.266206 kernel: audit: type=1305 audit(1768872790.258:210): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 01:33:10.262680 sudo[1789]: pam_unix(sudo:session): session closed for user root Jan 20 01:33:10.258000 audit[1814]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff049ba360 a2=420 a3=0 items=0 ppid=1795 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:10.268262 sshd[1788]: Connection closed by 10.0.0.1 port 40394 Jan 20 01:33:10.268590 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 20 01:33:10.274287 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:40394.service: Deactivated successfully. Jan 20 01:33:10.276587 kernel: audit: type=1300 audit(1768872790.258:210): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff049ba360 a2=420 a3=0 items=0 ppid=1795 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:10.276531 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:33:10.258000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 01:33:10.277647 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit. 
Jan 20 01:33:10.280401 kernel: audit: type=1327 audit(1768872790.258:210): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 01:33:10.280435 kernel: audit: type=1130 audit(1768872790.260:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.286176 kernel: audit: type=1131 audit(1768872790.260:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.291985 kernel: audit: type=1106 audit(1768872790.261:213): pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.261000 audit[1789]: USER_END pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.298700 kernel: audit: type=1104 audit(1768872790.261:214): pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 01:33:10.261000 audit[1789]: CRED_DISP pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.304964 kernel: audit: type=1106 audit(1768872790.269:215): pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.269000 audit[1784]: USER_END pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.314413 kernel: audit: type=1104 audit(1768872790.269:216): pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.269000 audit[1784]: CRED_DISP pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.321193 kernel: audit: type=1131 audit(1768872790.273:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.144:22-10.0.0.1:40394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:33:10.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.144:22-10.0.0.1:40394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.336298 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:40410.service - OpenSSH per-connection server daemon (10.0.0.1:40410). Jan 20 01:33:10.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.144:22-10.0.0.1:40410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.337054 systemd-logind[1578]: Removed session 7. Jan 20 01:33:10.406000 audit[1823]: USER_ACCT pid=1823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.407957 sshd[1823]: Accepted publickey for core from 10.0.0.1 port 40410 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:33:10.407000 audit[1823]: CRED_ACQ pid=1823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.408000 audit[1823]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbdd70a90 a2=3 a3=0 items=0 ppid=1 pid=1823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:10.408000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:33:10.410178 sshd-session[1823]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Jan 20 01:33:10.416513 systemd-logind[1578]: New session 8 of user core. Jan 20 01:33:10.427387 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 01:33:10.429000 audit[1823]: USER_START pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.431000 audit[1827]: CRED_ACQ pid=1827 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:10.445000 audit[1828]: USER_ACCT pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.446517 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:33:10.445000 audit[1828]: CRED_REFR pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:10.447010 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:33:10.445000 audit[1828]: USER_START pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:12.453926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:33:12.470305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:33:13.202243 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2847379924 wd_nsec: 2847378872 Jan 20 01:33:13.928877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:13.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:13.944502 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:33:14.036869 kubelet[1856]: E0120 01:33:14.036773 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:33:14.041512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:33:14.041834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:33:14.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:33:14.042488 systemd[1]: kubelet.service: Consumed 1.263s CPU time, 113.8M memory peak. Jan 20 01:33:14.196182 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 01:33:14.213602 (dockerd)[1867]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:33:14.536289 dockerd[1867]: time="2026-01-20T01:33:14.536064256Z" level=info msg="Starting up" Jan 20 01:33:14.537071 dockerd[1867]: time="2026-01-20T01:33:14.537005329Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 01:33:14.552916 dockerd[1867]: time="2026-01-20T01:33:14.552834006Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 01:33:14.642406 systemd[1]: var-lib-docker-metacopy\x2dcheck1498270905-merged.mount: Deactivated successfully. Jan 20 01:33:14.670137 dockerd[1867]: time="2026-01-20T01:33:14.670019508Z" level=info msg="Loading containers: start." Jan 20 01:33:14.684139 kernel: Initializing XFRM netlink socket Jan 20 01:33:14.765000 audit[1920]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.765000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc60bd02e0 a2=0 a3=0 items=0 ppid=1867 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:33:14.768000 audit[1922]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.768000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcc5a58a60 a2=0 a3=0 items=0 ppid=1867 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:33:14.772000 audit[1924]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.772000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda2c5def0 a2=0 a3=0 items=0 ppid=1867 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.772000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:33:14.775000 audit[1926]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.775000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc84eabff0 a2=0 a3=0 items=0 ppid=1867 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 01:33:14.779000 audit[1928]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.779000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe62af450 a2=0 a3=0 items=0 ppid=1867 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 01:33:14.782000 audit[1930]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.782000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffda1254ab0 a2=0 a3=0 items=0 ppid=1867 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.782000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:33:14.786000 audit[1932]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.786000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff8c66c610 a2=0 a3=0 items=0 ppid=1867 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:33:14.790000 audit[1934]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.790000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff28081cd0 a2=0 a3=0 items=0 ppid=1867 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.790000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 01:33:14.825000 audit[1937]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.825000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffee2d76a30 a2=0 a3=0 items=0 ppid=1867 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 01:33:14.829000 audit[1939]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.829000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffda10094a0 a2=0 a3=0 items=0 ppid=1867 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 01:33:14.832000 audit[1941]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.832000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 
a1=7ffd36c971b0 a2=0 a3=0 items=0 ppid=1867 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.832000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 01:33:14.835000 audit[1943]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.835000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdeda9fd30 a2=0 a3=0 items=0 ppid=1867 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:33:14.838000 audit[1945]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.838000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe78a77aa0 a2=0 a3=0 items=0 ppid=1867 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.838000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 01:33:14.894000 audit[1975]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.894000 audit[1975]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd02a09650 a2=0 a3=0 items=0 ppid=1867 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:33:14.897000 audit[1977]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.897000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe0fbd2e50 a2=0 a3=0 items=0 ppid=1867 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:33:14.901000 audit[1979]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.901000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3f625f00 a2=0 a3=0 items=0 ppid=1867 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:33:14.904000 audit[1981]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.904000 audit[1981]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7fff1a06a920 a2=0 a3=0 items=0 ppid=1867 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.904000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 01:33:14.908000 audit[1983]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.908000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce7774c00 a2=0 a3=0 items=0 ppid=1867 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 01:33:14.911000 audit[1985]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.911000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc222a1b00 a2=0 a3=0 items=0 ppid=1867 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:33:14.914000 audit[1987]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.914000 audit[1987]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=112 a0=3 a1=7ffd2c310070 a2=0 a3=0 items=0 ppid=1867 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:33:14.918000 audit[1989]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.918000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe824ffcf0 a2=0 a3=0 items=0 ppid=1867 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 01:33:14.922000 audit[1991]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.922000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff7f19de10 a2=0 a3=0 items=0 ppid=1867 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.922000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 01:33:14.926000 audit[1993]: NETFILTER_CFG 
table=filter:24 family=10 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.926000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd434605f0 a2=0 a3=0 items=0 ppid=1867 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 01:33:14.930000 audit[1995]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.930000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffeecfd3920 a2=0 a3=0 items=0 ppid=1867 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 01:33:14.933000 audit[1997]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.933000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc2dc4dea0 a2=0 a3=0 items=0 ppid=1867 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.933000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:33:14.937000 audit[1999]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.937000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd16b5b310 a2=0 a3=0 items=0 ppid=1867 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 01:33:14.946000 audit[2004]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.946000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeff9724a0 a2=0 a3=0 items=0 ppid=1867 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 01:33:14.950000 audit[2006]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.950000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd2aa508f0 a2=0 a3=0 items=0 ppid=1867 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:33:14.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 01:33:14.954000 audit[2008]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.954000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff373817d0 a2=0 a3=0 items=0 ppid=1867 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 01:33:14.957000 audit[2010]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.957000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe75fcf850 a2=0 a3=0 items=0 ppid=1867 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.957000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 01:33:14.961000 audit[2012]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.961000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffff4737420 a2=0 a3=0 items=0 ppid=1867 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.961000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 01:33:14.965000 audit[2014]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:14.965000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd5717a170 a2=0 a3=0 items=0 ppid=1867 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 01:33:14.986000 audit[2019]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.986000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffea1478360 a2=0 a3=0 items=0 ppid=1867 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 01:33:14.990000 audit[2021]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:14.990000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd5a86be80 a2=0 a3=0 items=0 ppid=1867 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:14.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 01:33:15.007000 audit[2029]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:15.007000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe48381da0 a2=0 a3=0 items=0 ppid=1867 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:15.007000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 01:33:15.022000 audit[2035]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:15.022000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdb3de0930 a2=0 a3=0 items=0 ppid=1867 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:15.022000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 01:33:15.027000 audit[2037]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:15.027000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffdf04bae60 a2=0 a3=0 items=0 ppid=1867 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:15.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 01:33:15.031000 audit[2039]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:15.031000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc5686ab70 a2=0 a3=0 items=0 ppid=1867 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:15.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 01:33:15.035000 audit[2041]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:15.035000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffec4d9d010 a2=0 a3=0 items=0 ppid=1867 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:15.035000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:33:15.038000 audit[2043]: NETFILTER_CFG table=filter:41 family=2 entries=1 
op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:15.038000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffecdc21500 a2=0 a3=0 items=0 ppid=1867 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:15.038000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 01:33:15.041574 systemd-networkd[1506]: docker0: Link UP Jan 20 01:33:15.048109 dockerd[1867]: time="2026-01-20T01:33:15.048008722Z" level=info msg="Loading containers: done." Jan 20 01:33:15.066218 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2058049346-merged.mount: Deactivated successfully. Jan 20 01:33:15.073119 dockerd[1867]: time="2026-01-20T01:33:15.072891917Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:33:15.073191 dockerd[1867]: time="2026-01-20T01:33:15.073144339Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 01:33:15.073438 dockerd[1867]: time="2026-01-20T01:33:15.073340325Z" level=info msg="Initializing buildkit" Jan 20 01:33:15.109988 dockerd[1867]: time="2026-01-20T01:33:15.109958692Z" level=info msg="Completed buildkit initialization" Jan 20 01:33:15.113938 dockerd[1867]: time="2026-01-20T01:33:15.113848726Z" level=info msg="Daemon has completed initialization" Jan 20 01:33:15.114067 dockerd[1867]: time="2026-01-20T01:33:15.114021858Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:33:15.114190 systemd[1]: Started docker.service - Docker 
Application Container Engine. Jan 20 01:33:15.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:15.906474 containerd[1611]: time="2026-01-20T01:33:15.906414623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 01:33:16.620035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1570723039.mount: Deactivated successfully. Jan 20 01:33:17.465459 containerd[1611]: time="2026-01-20T01:33:17.465346692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:17.466240 containerd[1611]: time="2026-01-20T01:33:17.466211321Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28365456" Jan 20 01:33:17.467417 containerd[1611]: time="2026-01-20T01:33:17.467373978Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:17.470309 containerd[1611]: time="2026-01-20T01:33:17.470249998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:17.471333 containerd[1611]: time="2026-01-20T01:33:17.471242772Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.564782204s" Jan 20 01:33:17.471333 containerd[1611]: 
time="2026-01-20T01:33:17.471325507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 01:33:17.472189 containerd[1611]: time="2026-01-20T01:33:17.471921174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 01:33:18.717236 containerd[1611]: time="2026-01-20T01:33:18.717053548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:18.718386 containerd[1611]: time="2026-01-20T01:33:18.718324511Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 01:33:18.719498 containerd[1611]: time="2026-01-20T01:33:18.719468427Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:18.722302 containerd[1611]: time="2026-01-20T01:33:18.722252305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:18.723224 containerd[1611]: time="2026-01-20T01:33:18.723190839Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.251243226s" Jan 20 01:33:18.723268 containerd[1611]: time="2026-01-20T01:33:18.723228138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns 
image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 01:33:18.723808 containerd[1611]: time="2026-01-20T01:33:18.723726509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 01:33:19.868136 containerd[1611]: time="2026-01-20T01:33:19.867977443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:19.869167 containerd[1611]: time="2026-01-20T01:33:19.869112552Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 01:33:19.870546 containerd[1611]: time="2026-01-20T01:33:19.870502878Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:19.873480 containerd[1611]: time="2026-01-20T01:33:19.873412021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:19.874184 containerd[1611]: time="2026-01-20T01:33:19.874066499Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.150263238s" Jan 20 01:33:19.874235 containerd[1611]: time="2026-01-20T01:33:19.874186694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 01:33:19.874871 containerd[1611]: time="2026-01-20T01:33:19.874632808Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 01:33:21.459369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764293838.mount: Deactivated successfully. Jan 20 01:33:22.408302 containerd[1611]: time="2026-01-20T01:33:22.408001754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:22.409639 containerd[1611]: time="2026-01-20T01:33:22.409026582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 20 01:33:22.410326 containerd[1611]: time="2026-01-20T01:33:22.410238951Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:22.412253 containerd[1611]: time="2026-01-20T01:33:22.412185180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:22.412987 containerd[1611]: time="2026-01-20T01:33:22.412923985Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.538262474s" Jan 20 01:33:22.412987 containerd[1611]: time="2026-01-20T01:33:22.412974189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 01:33:22.414793 containerd[1611]: time="2026-01-20T01:33:22.414484023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 01:33:23.106530 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1676975761.mount: Deactivated successfully. Jan 20 01:33:24.189644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:33:24.192945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:24.414663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:24.419166 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 20 01:33:24.419266 kernel: audit: type=1130 audit(1768872804.413:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:24.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:24.419602 containerd[1611]: time="2026-01-20T01:33:24.419530799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:24.421384 containerd[1611]: time="2026-01-20T01:33:24.421328468Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17572350" Jan 20 01:33:24.424205 containerd[1611]: time="2026-01-20T01:33:24.424144528Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:24.427543 containerd[1611]: time="2026-01-20T01:33:24.427432135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:24.429287 containerd[1611]: time="2026-01-20T01:33:24.429225231Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.014704229s" Jan 20 01:33:24.429287 containerd[1611]: time="2026-01-20T01:33:24.429276477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 01:33:24.430507 containerd[1611]: time="2026-01-20T01:33:24.430474772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 01:33:24.431526 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:33:24.523360 kubelet[2221]: E0120 01:33:24.523224 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:33:24.526405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:33:24.526688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:33:24.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:33:24.527375 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.7M memory peak. 
Jan 20 01:33:24.534128 kernel: audit: type=1131 audit(1768872804.526:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:33:25.532040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900089395.mount: Deactivated successfully. Jan 20 01:33:25.539366 containerd[1611]: time="2026-01-20T01:33:25.539232585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:33:25.540370 containerd[1611]: time="2026-01-20T01:33:25.540338102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:33:25.541808 containerd[1611]: time="2026-01-20T01:33:25.541713959Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:33:25.544731 containerd[1611]: time="2026-01-20T01:33:25.544679997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:33:25.546008 containerd[1611]: time="2026-01-20T01:33:25.545911477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.115402941s" Jan 20 01:33:25.546075 containerd[1611]: time="2026-01-20T01:33:25.546018466Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 01:33:25.546971 containerd[1611]: time="2026-01-20T01:33:25.546931444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 01:33:26.168129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443212260.mount: Deactivated successfully. Jan 20 01:33:28.809170 containerd[1611]: time="2026-01-20T01:33:28.808744795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:28.810423 containerd[1611]: time="2026-01-20T01:33:28.809807783Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55859474" Jan 20 01:33:28.811335 containerd[1611]: time="2026-01-20T01:33:28.811284260Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:28.814260 containerd[1611]: time="2026-01-20T01:33:28.814006750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:28.819160 containerd[1611]: time="2026-01-20T01:33:28.815398940Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.268429255s" Jan 20 01:33:28.819160 containerd[1611]: time="2026-01-20T01:33:28.815450206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 
01:33:30.770357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:30.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:30.770615 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.7M memory peak. Jan 20 01:33:30.773182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:30.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:30.783619 kernel: audit: type=1130 audit(1768872810.769:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:30.783688 kernel: audit: type=1131 audit(1768872810.769:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:30.808746 systemd[1]: Reload requested from client PID 2319 ('systemctl') (unit session-8.scope)... Jan 20 01:33:30.808805 systemd[1]: Reloading... Jan 20 01:33:30.902179 zram_generator::config[2367]: No configuration found. Jan 20 01:33:31.133031 systemd[1]: Reloading finished in 323 ms. 
Jan 20 01:33:31.159000 audit: BPF prog-id=61 op=LOAD Jan 20 01:33:31.159000 audit: BPF prog-id=44 op=UNLOAD Jan 20 01:33:31.164956 kernel: audit: type=1334 audit(1768872811.159:274): prog-id=61 op=LOAD Jan 20 01:33:31.165010 kernel: audit: type=1334 audit(1768872811.159:275): prog-id=44 op=UNLOAD Jan 20 01:33:31.165035 kernel: audit: type=1334 audit(1768872811.162:276): prog-id=62 op=LOAD Jan 20 01:33:31.165057 kernel: audit: type=1334 audit(1768872811.162:277): prog-id=56 op=UNLOAD Jan 20 01:33:31.165124 kernel: audit: type=1334 audit(1768872811.163:278): prog-id=63 op=LOAD Jan 20 01:33:31.165146 kernel: audit: type=1334 audit(1768872811.163:279): prog-id=45 op=UNLOAD Jan 20 01:33:31.165164 kernel: audit: type=1334 audit(1768872811.163:280): prog-id=64 op=LOAD Jan 20 01:33:31.165189 kernel: audit: type=1334 audit(1768872811.163:281): prog-id=65 op=LOAD Jan 20 01:33:31.162000 audit: BPF prog-id=62 op=LOAD Jan 20 01:33:31.162000 audit: BPF prog-id=56 op=UNLOAD Jan 20 01:33:31.163000 audit: BPF prog-id=63 op=LOAD Jan 20 01:33:31.163000 audit: BPF prog-id=45 op=UNLOAD Jan 20 01:33:31.163000 audit: BPF prog-id=64 op=LOAD Jan 20 01:33:31.163000 audit: BPF prog-id=65 op=LOAD Jan 20 01:33:31.163000 audit: BPF prog-id=46 op=UNLOAD Jan 20 01:33:31.163000 audit: BPF prog-id=47 op=UNLOAD Jan 20 01:33:31.164000 audit: BPF prog-id=66 op=LOAD Jan 20 01:33:31.164000 audit: BPF prog-id=51 op=UNLOAD Jan 20 01:33:31.164000 audit: BPF prog-id=67 op=LOAD Jan 20 01:33:31.164000 audit: BPF prog-id=68 op=LOAD Jan 20 01:33:31.164000 audit: BPF prog-id=52 op=UNLOAD Jan 20 01:33:31.164000 audit: BPF prog-id=53 op=UNLOAD Jan 20 01:33:31.164000 audit: BPF prog-id=69 op=LOAD Jan 20 01:33:31.164000 audit: BPF prog-id=70 op=LOAD Jan 20 01:33:31.164000 audit: BPF prog-id=54 op=UNLOAD Jan 20 01:33:31.165000 audit: BPF prog-id=55 op=UNLOAD Jan 20 01:33:31.166000 audit: BPF prog-id=71 op=LOAD Jan 20 01:33:31.166000 audit: BPF prog-id=48 op=UNLOAD Jan 20 01:33:31.166000 audit: BPF prog-id=72 op=LOAD 
Jan 20 01:33:31.166000 audit: BPF prog-id=73 op=LOAD Jan 20 01:33:31.166000 audit: BPF prog-id=49 op=UNLOAD Jan 20 01:33:31.166000 audit: BPF prog-id=50 op=UNLOAD Jan 20 01:33:31.167000 audit: BPF prog-id=74 op=LOAD Jan 20 01:33:31.167000 audit: BPF prog-id=41 op=UNLOAD Jan 20 01:33:31.167000 audit: BPF prog-id=75 op=LOAD Jan 20 01:33:31.167000 audit: BPF prog-id=76 op=LOAD Jan 20 01:33:31.167000 audit: BPF prog-id=42 op=UNLOAD Jan 20 01:33:31.167000 audit: BPF prog-id=43 op=UNLOAD Jan 20 01:33:31.168000 audit: BPF prog-id=77 op=LOAD Jan 20 01:33:31.168000 audit: BPF prog-id=57 op=UNLOAD Jan 20 01:33:31.170000 audit: BPF prog-id=78 op=LOAD Jan 20 01:33:31.170000 audit: BPF prog-id=58 op=UNLOAD Jan 20 01:33:31.170000 audit: BPF prog-id=79 op=LOAD Jan 20 01:33:31.170000 audit: BPF prog-id=80 op=LOAD Jan 20 01:33:31.171000 audit: BPF prog-id=59 op=UNLOAD Jan 20 01:33:31.171000 audit: BPF prog-id=60 op=UNLOAD Jan 20 01:33:31.196228 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:33:31.196379 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:33:31.196888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:31.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:33:31.197039 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.5M memory peak. Jan 20 01:33:31.199711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:31.421991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:31.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:33:31.437515 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:33:31.488132 kubelet[2413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:33:31.488132 kubelet[2413]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:33:31.488132 kubelet[2413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:33:31.488603 kubelet[2413]: I0120 01:33:31.488283 2413 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:33:31.798343 kubelet[2413]: I0120 01:33:31.797286 2413 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:33:31.798343 kubelet[2413]: I0120 01:33:31.797316 2413 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:33:31.798343 kubelet[2413]: I0120 01:33:31.797817 2413 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:33:31.818484 kubelet[2413]: E0120 01:33:31.818395 2413 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:33:31.823030 kubelet[2413]: I0120 
01:33:31.822985 2413 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:33:31.831152 kubelet[2413]: I0120 01:33:31.829977 2413 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:33:31.836647 kubelet[2413]: I0120 01:33:31.836597 2413 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 01:33:31.838284 kubelet[2413]: I0120 01:33:31.838226 2413 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:33:31.838619 kubelet[2413]: I0120 01:33:31.838270 2413 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope"
:"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:33:31.838746 kubelet[2413]: I0120 01:33:31.838647 2413 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:33:31.838746 kubelet[2413]: I0120 01:33:31.838659 2413 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:33:31.839052 kubelet[2413]: I0120 01:33:31.839003 2413 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:33:31.842857 kubelet[2413]: I0120 01:33:31.842797 2413 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:33:31.842907 kubelet[2413]: I0120 01:33:31.842864 2413 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:33:31.842996 kubelet[2413]: I0120 01:33:31.842927 2413 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:33:31.843027 kubelet[2413]: I0120 01:33:31.843012 2413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:33:31.846461 kubelet[2413]: I0120 01:33:31.846412 2413 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 01:33:31.847503 kubelet[2413]: I0120 01:33:31.847353 2413 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:33:31.848988 kubelet[2413]: W0120 01:33:31.848872 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 20 01:33:31.849030 kubelet[2413]: E0120 01:33:31.848991 2413 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:33:31.850629 kubelet[2413]: W0120 01:33:31.850551 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 20 01:33:31.850629 kubelet[2413]: W0120 01:33:31.850613 2413 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:33:31.850692 kubelet[2413]: E0120 01:33:31.850627 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:33:31.853708 kubelet[2413]: I0120 01:33:31.853267 2413 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:33:31.853708 kubelet[2413]: I0120 01:33:31.853370 2413 server.go:1287] "Started kubelet" Jan 20 01:33:31.853708 kubelet[2413]: I0120 01:33:31.853555 2413 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:33:31.854229 kubelet[2413]: I0120 01:33:31.853839 2413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:33:31.854483 kubelet[2413]: I0120 01:33:31.854461 2413 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:33:31.856244 kubelet[2413]: I0120 01:33:31.856219 2413 server.go:479] "Adding debug handlers 
to kubelet server" Jan 20 01:33:31.857514 kubelet[2413]: I0120 01:33:31.857451 2413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:33:31.858444 kubelet[2413]: I0120 01:33:31.858399 2413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:33:31.859327 kubelet[2413]: I0120 01:33:31.859295 2413 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:33:31.859471 kubelet[2413]: E0120 01:33:31.859427 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:33:31.862662 kubelet[2413]: E0120 01:33:31.862586 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Jan 20 01:33:31.862925 kubelet[2413]: I0120 01:33:31.862671 2413 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:33:31.862925 kubelet[2413]: I0120 01:33:31.862748 2413 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:33:31.863619 kubelet[2413]: W0120 01:33:31.863430 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 20 01:33:31.863619 kubelet[2413]: E0120 01:33:31.863488 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:33:31.863961 kubelet[2413]: I0120 01:33:31.863943 2413 
factory.go:221] Registration of the systemd container factory successfully Jan 20 01:33:31.862000 audit[2426]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.862000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe5976d900 a2=0 a3=0 items=0 ppid=2413 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:33:31.864436 kubelet[2413]: I0120 01:33:31.864168 2413 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:33:31.864436 kubelet[2413]: E0120 01:33:31.863213 2413 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4c73535d25c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:33:31.853325764 +0000 UTC m=+0.410582194,LastTimestamp:2026-01-20 01:33:31.853325764 +0000 UTC m=+0.410582194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:33:31.865048 kubelet[2413]: E0120 01:33:31.864979 2413 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:33:31.865676 kubelet[2413]: I0120 01:33:31.865606 2413 factory.go:221] Registration of the containerd container factory successfully Jan 20 01:33:31.865000 audit[2427]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.865000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff340303e0 a2=0 a3=0 items=0 ppid=2413 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:33:31.868000 audit[2429]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.868000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdd1ac4470 a2=0 a3=0 items=0 ppid=2413 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:33:31.871000 audit[2431]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.871000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd9fe946b0 a2=0 a3=0 items=0 ppid=2413 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:33:31.879000 audit[2434]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.879000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffce5154d60 a2=0 a3=0 items=0 ppid=2413 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 01:33:31.880867 kubelet[2413]: I0120 01:33:31.880781 2413 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 20 01:33:31.881000 audit[2435]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:31.881000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7fac2ba0 a2=0 a3=0 items=0 ppid=2413 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:33:31.882663 kubelet[2413]: I0120 01:33:31.882632 2413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 01:33:31.882754 kubelet[2413]: I0120 01:33:31.882730 2413 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:33:31.882896 kubelet[2413]: I0120 01:33:31.882856 2413 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 01:33:31.882896 kubelet[2413]: I0120 01:33:31.882887 2413 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:33:31.883037 kubelet[2413]: E0120 01:33:31.882949 2413 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:33:31.883000 audit[2437]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.883000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff762cfc10 a2=0 a3=0 items=0 ppid=2413 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 01:33:31.885626 kubelet[2413]: W0120 01:33:31.885162 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 20 01:33:31.885626 kubelet[2413]: E0120 01:33:31.885202 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:33:31.886000 audit[2442]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:31.886000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc354250f0 
a2=0 a3=0 items=0 ppid=2413 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 01:33:31.886000 audit[2443]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.886000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff67f11ff0 a2=0 a3=0 items=0 ppid=2413 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 01:33:31.888000 audit[2445]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:31.888000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0d5ebff0 a2=0 a3=0 items=0 ppid=2413 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 01:33:31.889000 audit[2446]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:31.889000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffcdbf86640 a2=0 a3=0 items=0 ppid=2413 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 01:33:31.891009 kubelet[2413]: I0120 01:33:31.890780 2413 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:33:31.891009 kubelet[2413]: I0120 01:33:31.890791 2413 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:33:31.891009 kubelet[2413]: I0120 01:33:31.890830 2413 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:33:31.891000 audit[2447]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:31.891000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff52d26410 a2=0 a3=0 items=0 ppid=2413 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:31.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 01:33:31.959910 kubelet[2413]: E0120 01:33:31.959746 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:33:31.978234 kubelet[2413]: I0120 01:33:31.977969 2413 policy_none.go:49] "None policy: Start" Jan 20 01:33:31.978234 kubelet[2413]: I0120 01:33:31.978061 2413 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:33:31.978234 kubelet[2413]: I0120 01:33:31.978217 2413 state_mem.go:35] "Initializing new in-memory state store" Jan 20 
01:33:31.983141 kubelet[2413]: E0120 01:33:31.983112 2413 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:33:31.987826 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:33:32.005666 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:33:32.010639 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:33:32.019483 kubelet[2413]: I0120 01:33:32.019415 2413 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:33:32.019706 kubelet[2413]: I0120 01:33:32.019652 2413 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:33:32.019835 kubelet[2413]: I0120 01:33:32.019693 2413 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:33:32.020137 kubelet[2413]: I0120 01:33:32.020039 2413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:33:32.021036 kubelet[2413]: E0120 01:33:32.020979 2413 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:33:32.021297 kubelet[2413]: E0120 01:33:32.021239 2413 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:33:32.063671 kubelet[2413]: E0120 01:33:32.063424 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Jan 20 01:33:32.123386 kubelet[2413]: I0120 01:33:32.123284 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:33:32.123915 kubelet[2413]: E0120 01:33:32.123875 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 20 01:33:32.193866 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 01:33:32.217334 kubelet[2413]: E0120 01:33:32.217281 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:32.221722 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 20 01:33:32.241114 kubelet[2413]: E0120 01:33:32.240977 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:32.244815 systemd[1]: Created slice kubepods-burstable-pod6c637b139afc147f6d50e8833168857b.slice - libcontainer container kubepods-burstable-pod6c637b139afc147f6d50e8833168857b.slice. 
Jan 20 01:33:32.247071 kubelet[2413]: E0120 01:33:32.247034 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:32.325594 kubelet[2413]: I0120 01:33:32.325383 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:33:32.325862 kubelet[2413]: E0120 01:33:32.325816 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 20 01:33:32.364489 kubelet[2413]: I0120 01:33:32.364405 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:32.364489 kubelet[2413]: I0120 01:33:32.364447 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:32.364489 kubelet[2413]: I0120 01:33:32.364466 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:32.364489 kubelet[2413]: I0120 01:33:32.364495 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c637b139afc147f6d50e8833168857b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c637b139afc147f6d50e8833168857b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:32.364669 kubelet[2413]: I0120 01:33:32.364511 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:32.364669 kubelet[2413]: I0120 01:33:32.364524 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:32.364669 kubelet[2413]: I0120 01:33:32.364536 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:33:32.364669 kubelet[2413]: I0120 01:33:32.364550 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c637b139afc147f6d50e8833168857b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c637b139afc147f6d50e8833168857b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:32.364669 kubelet[2413]: I0120 01:33:32.364561 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c637b139afc147f6d50e8833168857b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c637b139afc147f6d50e8833168857b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:32.464991 kubelet[2413]: E0120 01:33:32.464729 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Jan 20 01:33:32.518518 kubelet[2413]: E0120 01:33:32.518424 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.519481 containerd[1611]: time="2026-01-20T01:33:32.519424149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 01:33:32.541851 kubelet[2413]: E0120 01:33:32.541800 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.542407 containerd[1611]: time="2026-01-20T01:33:32.542336633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 01:33:32.543005 containerd[1611]: time="2026-01-20T01:33:32.542970741Z" level=info msg="connecting to shim 3c0fbbe66b3e4f172e6c1207a1db4803cdaef1ff22527dd86f3bc3845978e757" address="unix:///run/containerd/s/4dd22f08dab307c7c8c26ab10fa616d11dbaec6155110f0814bc0b432601efbc" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:33:32.547927 kubelet[2413]: E0120 01:33:32.547632 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.548184 containerd[1611]: time="2026-01-20T01:33:32.548138777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c637b139afc147f6d50e8833168857b,Namespace:kube-system,Attempt:0,}" Jan 20 01:33:32.582812 containerd[1611]: time="2026-01-20T01:33:32.582640903Z" level=info msg="connecting to shim 63c08b336f5be0b93829ff692d2dbd6ffb8313100765c6c4dbecad1d12b6c5d5" address="unix:///run/containerd/s/2decb1cc45fd945312643b6ee6df48e565c0501ec72603441b25f6b1a0b41a26" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:33:32.583389 systemd[1]: Started cri-containerd-3c0fbbe66b3e4f172e6c1207a1db4803cdaef1ff22527dd86f3bc3845978e757.scope - libcontainer container 3c0fbbe66b3e4f172e6c1207a1db4803cdaef1ff22527dd86f3bc3845978e757. Jan 20 01:33:32.603953 containerd[1611]: time="2026-01-20T01:33:32.603902287Z" level=info msg="connecting to shim 762797a4aa1def54995ce1d90cfc77bd8037d3cd923b51780686b6ec2de39eba" address="unix:///run/containerd/s/820e97a7e9dc791456c96a01db91316e469f1e40d792d55f065d61a5aca98788" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:33:32.625000 audit: BPF prog-id=81 op=LOAD Jan 20 01:33:32.625000 audit: BPF prog-id=82 op=LOAD Jan 20 01:33:32.625000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2456 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363306662626536366233653466313732653663313230376131646234 Jan 20 01:33:32.626000 audit: BPF prog-id=82 op=UNLOAD Jan 20 01:33:32.626000 audit[2468]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363306662626536366233653466313732653663313230376131646234 Jan 20 01:33:32.626000 audit: BPF prog-id=83 op=LOAD Jan 20 01:33:32.626000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2456 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363306662626536366233653466313732653663313230376131646234 Jan 20 01:33:32.626000 audit: BPF prog-id=84 op=LOAD Jan 20 01:33:32.626000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2456 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363306662626536366233653466313732653663313230376131646234 Jan 20 01:33:32.626000 audit: BPF prog-id=84 op=UNLOAD Jan 
20 01:33:32.626000 audit[2468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363306662626536366233653466313732653663313230376131646234 Jan 20 01:33:32.626000 audit: BPF prog-id=83 op=UNLOAD Jan 20 01:33:32.626000 audit[2468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363306662626536366233653466313732653663313230376131646234 Jan 20 01:33:32.626000 audit: BPF prog-id=85 op=LOAD Jan 20 01:33:32.626000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2456 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363306662626536366233653466313732653663313230376131646234 Jan 20 01:33:32.643494 
systemd[1]: Started cri-containerd-63c08b336f5be0b93829ff692d2dbd6ffb8313100765c6c4dbecad1d12b6c5d5.scope - libcontainer container 63c08b336f5be0b93829ff692d2dbd6ffb8313100765c6c4dbecad1d12b6c5d5. Jan 20 01:33:32.645416 systemd[1]: Started cri-containerd-762797a4aa1def54995ce1d90cfc77bd8037d3cd923b51780686b6ec2de39eba.scope - libcontainer container 762797a4aa1def54995ce1d90cfc77bd8037d3cd923b51780686b6ec2de39eba. Jan 20 01:33:32.660000 audit: BPF prog-id=86 op=LOAD Jan 20 01:33:32.660000 audit: BPF prog-id=87 op=LOAD Jan 20 01:33:32.660000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2504 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736323739376134616131646566353439393563653164393063666337 Jan 20 01:33:32.661000 audit: BPF prog-id=87 op=UNLOAD Jan 20 01:33:32.661000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736323739376134616131646566353439393563653164393063666337 Jan 20 01:33:32.661000 audit: BPF prog-id=88 op=LOAD Jan 20 01:33:32.661000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 
items=0 ppid=2504 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736323739376134616131646566353439393563653164393063666337 Jan 20 01:33:32.661000 audit: BPF prog-id=89 op=LOAD Jan 20 01:33:32.661000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2504 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736323739376134616131646566353439393563653164393063666337 Jan 20 01:33:32.661000 audit: BPF prog-id=89 op=UNLOAD Jan 20 01:33:32.661000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736323739376134616131646566353439393563653164393063666337 Jan 20 01:33:32.661000 audit: BPF prog-id=88 op=UNLOAD Jan 20 01:33:32.661000 audit[2532]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736323739376134616131646566353439393563653164393063666337 Jan 20 01:33:32.662000 audit: BPF prog-id=90 op=LOAD Jan 20 01:33:32.662000 audit: BPF prog-id=91 op=LOAD Jan 20 01:33:32.662000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2504 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736323739376134616131646566353439393563653164393063666337 Jan 20 01:33:32.663000 audit: BPF prog-id=92 op=LOAD Jan 20 01:33:32.663000 audit[2512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2487 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633038623333366635626530623933383239666636393264326462 Jan 20 01:33:32.663000 audit: 
BPF prog-id=92 op=UNLOAD Jan 20 01:33:32.663000 audit[2512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633038623333366635626530623933383239666636393264326462 Jan 20 01:33:32.664000 audit: BPF prog-id=93 op=LOAD Jan 20 01:33:32.664000 audit[2512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2487 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633038623333366635626530623933383239666636393264326462 Jan 20 01:33:32.664000 audit: BPF prog-id=94 op=LOAD Jan 20 01:33:32.664000 audit[2512]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2487 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.664000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633038623333366635626530623933383239666636393264326462 Jan 20 01:33:32.664000 audit: BPF prog-id=94 op=UNLOAD Jan 20 01:33:32.664000 audit[2512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633038623333366635626530623933383239666636393264326462 Jan 20 01:33:32.665000 audit: BPF prog-id=93 op=UNLOAD Jan 20 01:33:32.665000 audit[2512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633038623333366635626530623933383239666636393264326462 Jan 20 01:33:32.665000 audit: BPF prog-id=95 op=LOAD Jan 20 01:33:32.665000 audit[2512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2487 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:33:32.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633038623333366635626530623933383239666636393264326462 Jan 20 01:33:32.683941 kubelet[2413]: W0120 01:33:32.683851 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 20 01:33:32.683941 kubelet[2413]: E0120 01:33:32.683955 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:33:32.686863 containerd[1611]: time="2026-01-20T01:33:32.686799093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c0fbbe66b3e4f172e6c1207a1db4803cdaef1ff22527dd86f3bc3845978e757\"" Jan 20 01:33:32.688818 kubelet[2413]: E0120 01:33:32.688746 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.692771 containerd[1611]: time="2026-01-20T01:33:32.692672127Z" level=info msg="CreateContainer within sandbox \"3c0fbbe66b3e4f172e6c1207a1db4803cdaef1ff22527dd86f3bc3845978e757\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:33:32.712140 containerd[1611]: time="2026-01-20T01:33:32.711059949Z" level=info msg="Container 
bbc9aa25d56375a96dea14de624182f508a7f14609dfb3fb62d4dc3b22280ab2: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:33:32.723667 containerd[1611]: time="2026-01-20T01:33:32.723628576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c637b139afc147f6d50e8833168857b,Namespace:kube-system,Attempt:0,} returns sandbox id \"762797a4aa1def54995ce1d90cfc77bd8037d3cd923b51780686b6ec2de39eba\"" Jan 20 01:33:32.725338 kubelet[2413]: E0120 01:33:32.725296 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.727965 containerd[1611]: time="2026-01-20T01:33:32.727924789Z" level=info msg="CreateContainer within sandbox \"762797a4aa1def54995ce1d90cfc77bd8037d3cd923b51780686b6ec2de39eba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:33:32.728338 kubelet[2413]: I0120 01:33:32.728273 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:33:32.728701 kubelet[2413]: E0120 01:33:32.728680 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 20 01:33:32.728907 containerd[1611]: time="2026-01-20T01:33:32.728879033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"63c08b336f5be0b93829ff692d2dbd6ffb8313100765c6c4dbecad1d12b6c5d5\"" Jan 20 01:33:32.730300 containerd[1611]: time="2026-01-20T01:33:32.730271098Z" level=info msg="CreateContainer within sandbox \"3c0fbbe66b3e4f172e6c1207a1db4803cdaef1ff22527dd86f3bc3845978e757\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"bbc9aa25d56375a96dea14de624182f508a7f14609dfb3fb62d4dc3b22280ab2\"" Jan 20 01:33:32.730916 kubelet[2413]: E0120 01:33:32.730848 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.731879 containerd[1611]: time="2026-01-20T01:33:32.731826159Z" level=info msg="StartContainer for \"bbc9aa25d56375a96dea14de624182f508a7f14609dfb3fb62d4dc3b22280ab2\"" Jan 20 01:33:32.733151 containerd[1611]: time="2026-01-20T01:33:32.733052358Z" level=info msg="connecting to shim bbc9aa25d56375a96dea14de624182f508a7f14609dfb3fb62d4dc3b22280ab2" address="unix:///run/containerd/s/4dd22f08dab307c7c8c26ab10fa616d11dbaec6155110f0814bc0b432601efbc" protocol=ttrpc version=3 Jan 20 01:33:32.734249 containerd[1611]: time="2026-01-20T01:33:32.734217783Z" level=info msg="CreateContainer within sandbox \"63c08b336f5be0b93829ff692d2dbd6ffb8313100765c6c4dbecad1d12b6c5d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:33:32.742282 containerd[1611]: time="2026-01-20T01:33:32.742249719Z" level=info msg="Container 14783cfb4261dc5a3f8ab9643c7ce08026bfa99f570e99ce00666e4a55f4b0a8: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:33:32.752723 containerd[1611]: time="2026-01-20T01:33:32.752529128Z" level=info msg="CreateContainer within sandbox \"762797a4aa1def54995ce1d90cfc77bd8037d3cd923b51780686b6ec2de39eba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14783cfb4261dc5a3f8ab9643c7ce08026bfa99f570e99ce00666e4a55f4b0a8\"" Jan 20 01:33:32.753843 containerd[1611]: time="2026-01-20T01:33:32.753817151Z" level=info msg="StartContainer for \"14783cfb4261dc5a3f8ab9643c7ce08026bfa99f570e99ce00666e4a55f4b0a8\"" Jan 20 01:33:32.756213 containerd[1611]: time="2026-01-20T01:33:32.756181538Z" level=info msg="connecting to shim 14783cfb4261dc5a3f8ab9643c7ce08026bfa99f570e99ce00666e4a55f4b0a8" 
address="unix:///run/containerd/s/820e97a7e9dc791456c96a01db91316e469f1e40d792d55f065d61a5aca98788" protocol=ttrpc version=3 Jan 20 01:33:32.757564 containerd[1611]: time="2026-01-20T01:33:32.757519462Z" level=info msg="Container 97eeba87924ad7e88dc83c1d294f9526f1fc71b7f7e9a6fe528330f42880fb60: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:33:32.760397 systemd[1]: Started cri-containerd-bbc9aa25d56375a96dea14de624182f508a7f14609dfb3fb62d4dc3b22280ab2.scope - libcontainer container bbc9aa25d56375a96dea14de624182f508a7f14609dfb3fb62d4dc3b22280ab2. Jan 20 01:33:32.771786 containerd[1611]: time="2026-01-20T01:33:32.771637919Z" level=info msg="CreateContainer within sandbox \"63c08b336f5be0b93829ff692d2dbd6ffb8313100765c6c4dbecad1d12b6c5d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"97eeba87924ad7e88dc83c1d294f9526f1fc71b7f7e9a6fe528330f42880fb60\"" Jan 20 01:33:32.772618 containerd[1611]: time="2026-01-20T01:33:32.772538045Z" level=info msg="StartContainer for \"97eeba87924ad7e88dc83c1d294f9526f1fc71b7f7e9a6fe528330f42880fb60\"" Jan 20 01:33:32.773608 containerd[1611]: time="2026-01-20T01:33:32.773562108Z" level=info msg="connecting to shim 97eeba87924ad7e88dc83c1d294f9526f1fc71b7f7e9a6fe528330f42880fb60" address="unix:///run/containerd/s/2decb1cc45fd945312643b6ee6df48e565c0501ec72603441b25f6b1a0b41a26" protocol=ttrpc version=3 Jan 20 01:33:32.781343 systemd[1]: Started cri-containerd-14783cfb4261dc5a3f8ab9643c7ce08026bfa99f570e99ce00666e4a55f4b0a8.scope - libcontainer container 14783cfb4261dc5a3f8ab9643c7ce08026bfa99f570e99ce00666e4a55f4b0a8. 
Jan 20 01:33:32.783000 audit: BPF prog-id=96 op=LOAD Jan 20 01:33:32.784000 audit: BPF prog-id=97 op=LOAD Jan 20 01:33:32.784000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2456 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262633961613235643536333735613936646561313464653632343138 Jan 20 01:33:32.784000 audit: BPF prog-id=97 op=UNLOAD Jan 20 01:33:32.784000 audit[2583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262633961613235643536333735613936646561313464653632343138 Jan 20 01:33:32.785000 audit: BPF prog-id=98 op=LOAD Jan 20 01:33:32.785000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2456 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.785000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262633961613235643536333735613936646561313464653632343138 Jan 20 01:33:32.785000 audit: BPF prog-id=99 op=LOAD Jan 20 01:33:32.785000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2456 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262633961613235643536333735613936646561313464653632343138 Jan 20 01:33:32.785000 audit: BPF prog-id=99 op=UNLOAD Jan 20 01:33:32.785000 audit[2583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262633961613235643536333735613936646561313464653632343138 Jan 20 01:33:32.785000 audit: BPF prog-id=98 op=UNLOAD Jan 20 01:33:32.785000 audit[2583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:33:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262633961613235643536333735613936646561313464653632343138 Jan 20 01:33:32.785000 audit: BPF prog-id=100 op=LOAD Jan 20 01:33:32.785000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2456 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262633961613235643536333735613936646561313464653632343138 Jan 20 01:33:32.807385 kubelet[2413]: W0120 01:33:32.807246 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 20 01:33:32.807385 kubelet[2413]: E0120 01:33:32.807320 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:33:32.810537 systemd[1]: Started cri-containerd-97eeba87924ad7e88dc83c1d294f9526f1fc71b7f7e9a6fe528330f42880fb60.scope - libcontainer container 97eeba87924ad7e88dc83c1d294f9526f1fc71b7f7e9a6fe528330f42880fb60. 
Jan 20 01:33:32.812000 audit: BPF prog-id=101 op=LOAD Jan 20 01:33:32.814000 audit: BPF prog-id=102 op=LOAD Jan 20 01:33:32.814000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2504 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134373833636662343236316463356133663861623936343363376365 Jan 20 01:33:32.815000 audit: BPF prog-id=102 op=UNLOAD Jan 20 01:33:32.815000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134373833636662343236316463356133663861623936343363376365 Jan 20 01:33:32.815000 audit: BPF prog-id=103 op=LOAD Jan 20 01:33:32.815000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2504 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.815000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134373833636662343236316463356133663861623936343363376365 Jan 20 01:33:32.815000 audit: BPF prog-id=104 op=LOAD Jan 20 01:33:32.815000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2504 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134373833636662343236316463356133663861623936343363376365 Jan 20 01:33:32.815000 audit: BPF prog-id=104 op=UNLOAD Jan 20 01:33:32.815000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134373833636662343236316463356133663861623936343363376365 Jan 20 01:33:32.815000 audit: BPF prog-id=103 op=UNLOAD Jan 20 01:33:32.815000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:33:32.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134373833636662343236316463356133663861623936343363376365 Jan 20 01:33:32.815000 audit: BPF prog-id=105 op=LOAD Jan 20 01:33:32.815000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2504 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134373833636662343236316463356133663861623936343363376365 Jan 20 01:33:32.832000 audit: BPF prog-id=106 op=LOAD Jan 20 01:33:32.833000 audit: BPF prog-id=107 op=LOAD Jan 20 01:33:32.833000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2487 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937656562613837393234616437653838646338336331643239346639 Jan 20 01:33:32.833000 audit: BPF prog-id=107 op=UNLOAD Jan 20 01:33:32.833000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937656562613837393234616437653838646338336331643239346639 Jan 20 01:33:32.833000 audit: BPF prog-id=108 op=LOAD Jan 20 01:33:32.833000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2487 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937656562613837393234616437653838646338336331643239346639 Jan 20 01:33:32.833000 audit: BPF prog-id=109 op=LOAD Jan 20 01:33:32.833000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2487 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937656562613837393234616437653838646338336331643239346639 Jan 20 01:33:32.834000 audit: BPF prog-id=109 op=UNLOAD Jan 20 01:33:32.834000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937656562613837393234616437653838646338336331643239346639 Jan 20 01:33:32.834000 audit: BPF prog-id=108 op=UNLOAD Jan 20 01:33:32.834000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937656562613837393234616437653838646338336331643239346639 Jan 20 01:33:32.834000 audit: BPF prog-id=110 op=LOAD Jan 20 01:33:32.834000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2487 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:32.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937656562613837393234616437653838646338336331643239346639 Jan 20 01:33:32.843837 containerd[1611]: time="2026-01-20T01:33:32.843724174Z" level=info msg="StartContainer for \"bbc9aa25d56375a96dea14de624182f508a7f14609dfb3fb62d4dc3b22280ab2\" returns 
successfully" Jan 20 01:33:32.881039 containerd[1611]: time="2026-01-20T01:33:32.880939693Z" level=info msg="StartContainer for \"14783cfb4261dc5a3f8ab9643c7ce08026bfa99f570e99ce00666e4a55f4b0a8\" returns successfully" Jan 20 01:33:32.890907 containerd[1611]: time="2026-01-20T01:33:32.890862153Z" level=info msg="StartContainer for \"97eeba87924ad7e88dc83c1d294f9526f1fc71b7f7e9a6fe528330f42880fb60\" returns successfully" Jan 20 01:33:32.899383 kubelet[2413]: E0120 01:33:32.899185 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:32.899383 kubelet[2413]: E0120 01:33:32.899300 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.901146 kubelet[2413]: E0120 01:33:32.901030 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:32.902285 kubelet[2413]: E0120 01:33:32.902271 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:32.906076 kubelet[2413]: E0120 01:33:32.906063 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:32.906277 kubelet[2413]: E0120 01:33:32.906265 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:33.533544 kubelet[2413]: I0120 01:33:33.533061 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:33:33.911225 kubelet[2413]: E0120 01:33:33.909562 2413 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:33.911584 kubelet[2413]: E0120 01:33:33.911560 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:33.912375 kubelet[2413]: E0120 01:33:33.912338 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:33:33.912962 kubelet[2413]: E0120 01:33:33.912892 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:34.292889 kubelet[2413]: E0120 01:33:34.292812 2413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 01:33:34.483468 kubelet[2413]: I0120 01:33:34.483240 2413 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:33:34.483468 kubelet[2413]: E0120 01:33:34.483271 2413 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:33:34.563887 kubelet[2413]: I0120 01:33:34.563452 2413 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:34.569011 kubelet[2413]: E0120 01:33:34.568944 2413 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:34.569011 kubelet[2413]: I0120 01:33:34.568985 2413 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Jan 20 01:33:34.570582 kubelet[2413]: E0120 01:33:34.570528 2413 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 01:33:34.570582 kubelet[2413]: I0120 01:33:34.570562 2413 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:34.572182 kubelet[2413]: E0120 01:33:34.572141 2413 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:34.847627 kubelet[2413]: I0120 01:33:34.847454 2413 apiserver.go:52] "Watching apiserver" Jan 20 01:33:34.863223 kubelet[2413]: I0120 01:33:34.863139 2413 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:33:36.879356 systemd[1]: Reload requested from client PID 2689 ('systemctl') (unit session-8.scope)... Jan 20 01:33:36.879395 systemd[1]: Reloading... Jan 20 01:33:36.975159 zram_generator::config[2735]: No configuration found. Jan 20 01:33:37.244802 systemd[1]: Reloading finished in 364 ms. Jan 20 01:33:37.291864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:37.313158 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:33:37.313573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:37.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:37.313672 systemd[1]: kubelet.service: Consumed 983ms CPU time, 130.5M memory peak. 
Jan 20 01:33:37.315312 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 01:33:37.315386 kernel: audit: type=1131 audit(1768872817.312:376): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:37.317359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:37.314000 audit: BPF prog-id=111 op=LOAD Jan 20 01:33:37.323842 kernel: audit: type=1334 audit(1768872817.314:377): prog-id=111 op=LOAD Jan 20 01:33:37.323904 kernel: audit: type=1334 audit(1768872817.314:378): prog-id=77 op=UNLOAD Jan 20 01:33:37.314000 audit: BPF prog-id=77 op=UNLOAD Jan 20 01:33:37.320000 audit: BPF prog-id=112 op=LOAD Jan 20 01:33:37.326846 kernel: audit: type=1334 audit(1768872817.320:379): prog-id=112 op=LOAD Jan 20 01:33:37.320000 audit: BPF prog-id=61 op=UNLOAD Jan 20 01:33:37.331456 kernel: audit: type=1334 audit(1768872817.320:380): prog-id=61 op=UNLOAD Jan 20 01:33:37.331511 kernel: audit: type=1334 audit(1768872817.322:381): prog-id=113 op=LOAD Jan 20 01:33:37.322000 audit: BPF prog-id=113 op=LOAD Jan 20 01:33:37.333758 kernel: audit: type=1334 audit(1768872817.322:382): prog-id=66 op=UNLOAD Jan 20 01:33:37.322000 audit: BPF prog-id=66 op=UNLOAD Jan 20 01:33:37.336334 kernel: audit: type=1334 audit(1768872817.322:383): prog-id=114 op=LOAD Jan 20 01:33:37.322000 audit: BPF prog-id=114 op=LOAD Jan 20 01:33:37.322000 audit: BPF prog-id=115 op=LOAD Jan 20 01:33:37.340631 kernel: audit: type=1334 audit(1768872817.322:384): prog-id=115 op=LOAD Jan 20 01:33:37.340685 kernel: audit: type=1334 audit(1768872817.322:385): prog-id=67 op=UNLOAD Jan 20 01:33:37.322000 audit: BPF prog-id=67 op=UNLOAD Jan 20 01:33:37.322000 audit: BPF prog-id=68 op=UNLOAD Jan 20 01:33:37.322000 audit: BPF prog-id=116 op=LOAD Jan 20 01:33:37.322000 audit: BPF prog-id=63 op=UNLOAD Jan 20 01:33:37.322000 audit: BPF prog-id=117 
op=LOAD Jan 20 01:33:37.322000 audit: BPF prog-id=118 op=LOAD Jan 20 01:33:37.322000 audit: BPF prog-id=64 op=UNLOAD Jan 20 01:33:37.322000 audit: BPF prog-id=65 op=UNLOAD Jan 20 01:33:37.326000 audit: BPF prog-id=119 op=LOAD Jan 20 01:33:37.326000 audit: BPF prog-id=71 op=UNLOAD Jan 20 01:33:37.326000 audit: BPF prog-id=120 op=LOAD Jan 20 01:33:37.326000 audit: BPF prog-id=121 op=LOAD Jan 20 01:33:37.326000 audit: BPF prog-id=72 op=UNLOAD Jan 20 01:33:37.326000 audit: BPF prog-id=73 op=UNLOAD Jan 20 01:33:37.328000 audit: BPF prog-id=122 op=LOAD Jan 20 01:33:37.355000 audit: BPF prog-id=78 op=UNLOAD Jan 20 01:33:37.355000 audit: BPF prog-id=123 op=LOAD Jan 20 01:33:37.355000 audit: BPF prog-id=124 op=LOAD Jan 20 01:33:37.355000 audit: BPF prog-id=79 op=UNLOAD Jan 20 01:33:37.355000 audit: BPF prog-id=80 op=UNLOAD Jan 20 01:33:37.356000 audit: BPF prog-id=125 op=LOAD Jan 20 01:33:37.356000 audit: BPF prog-id=74 op=UNLOAD Jan 20 01:33:37.356000 audit: BPF prog-id=126 op=LOAD Jan 20 01:33:37.356000 audit: BPF prog-id=127 op=LOAD Jan 20 01:33:37.357000 audit: BPF prog-id=75 op=UNLOAD Jan 20 01:33:37.357000 audit: BPF prog-id=76 op=UNLOAD Jan 20 01:33:37.358000 audit: BPF prog-id=128 op=LOAD Jan 20 01:33:37.358000 audit: BPF prog-id=62 op=UNLOAD Jan 20 01:33:37.359000 audit: BPF prog-id=129 op=LOAD Jan 20 01:33:37.359000 audit: BPF prog-id=130 op=LOAD Jan 20 01:33:37.359000 audit: BPF prog-id=69 op=UNLOAD Jan 20 01:33:37.359000 audit: BPF prog-id=70 op=UNLOAD Jan 20 01:33:37.540066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:37.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:33:37.544889 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:33:37.608666 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:33:37.609891 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:33:37.610022 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:33:37.610642 kubelet[2780]: I0120 01:33:37.610537 2780 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:33:37.619563 kubelet[2780]: I0120 01:33:37.619426 2780 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:33:37.619563 kubelet[2780]: I0120 01:33:37.619448 2780 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:33:37.619765 kubelet[2780]: I0120 01:33:37.619702 2780 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:33:37.621343 kubelet[2780]: I0120 01:33:37.621305 2780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 20 01:33:37.624509 kubelet[2780]: I0120 01:33:37.624436 2780 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:33:37.630584 kubelet[2780]: I0120 01:33:37.630553 2780 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:33:37.638883 kubelet[2780]: I0120 01:33:37.638835 2780 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 01:33:37.639323 kubelet[2780]: I0120 01:33:37.639198 2780 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:33:37.639464 kubelet[2780]: I0120 01:33:37.639285 2780 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:33:37.639464 kubelet[2780]: I0120 01:33:37.639452 2780 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:33:37.639464 kubelet[2780]: I0120 01:33:37.639461 2780 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:33:37.639639 kubelet[2780]: I0120 01:33:37.639515 2780 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:33:37.639864 kubelet[2780]: I0120 01:33:37.639794 2780 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:33:37.639905 kubelet[2780]: I0120 01:33:37.639880 2780 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:33:37.639905 kubelet[2780]: I0120 01:33:37.639904 2780 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:33:37.639942 kubelet[2780]: I0120 01:33:37.639914 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:33:37.642769 kubelet[2780]: I0120 01:33:37.640644 2780 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 01:33:37.642769 kubelet[2780]: I0120 01:33:37.641594 2780 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:33:37.643055 kubelet[2780]: I0120 01:33:37.642990 2780 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:33:37.643055 kubelet[2780]: I0120 01:33:37.643034 2780 server.go:1287] "Started kubelet" Jan 20 01:33:37.646471 kubelet[2780]: I0120 01:33:37.645492 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:33:37.647463 
kubelet[2780]: I0120 01:33:37.646067 2780 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:33:37.649394 kubelet[2780]: E0120 01:33:37.649260 2780 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:33:37.650442 kubelet[2780]: I0120 01:33:37.649604 2780 server.go:479] "Adding debug handlers to kubelet server" Jan 20 01:33:37.650583 kubelet[2780]: I0120 01:33:37.650556 2780 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:33:37.651074 kubelet[2780]: I0120 01:33:37.650796 2780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:33:37.652995 kubelet[2780]: I0120 01:33:37.652956 2780 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:33:37.654390 kubelet[2780]: I0120 01:33:37.654342 2780 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:33:37.654615 kubelet[2780]: E0120 01:33:37.654583 2780 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:33:37.654901 kubelet[2780]: I0120 01:33:37.654866 2780 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:33:37.655864 kubelet[2780]: I0120 01:33:37.655041 2780 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:33:37.658815 kubelet[2780]: I0120 01:33:37.657448 2780 factory.go:221] Registration of the systemd container factory successfully Jan 20 01:33:37.658815 kubelet[2780]: I0120 01:33:37.657552 2780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:33:37.660503 kubelet[2780]: I0120 01:33:37.660483 2780 
factory.go:221] Registration of the containerd container factory successfully Jan 20 01:33:37.677951 kubelet[2780]: I0120 01:33:37.677912 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 01:33:37.680630 kubelet[2780]: I0120 01:33:37.680609 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 01:33:37.680802 kubelet[2780]: I0120 01:33:37.680787 2780 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:33:37.684989 kubelet[2780]: I0120 01:33:37.680936 2780 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:33:37.697360 kubelet[2780]: I0120 01:33:37.696285 2780 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:33:37.739001 kubelet[2780]: E0120 01:33:37.733343 2780 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:33:37.877260 kubelet[2780]: E0120 01:33:37.867308 2780 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:33:38.067581 kubelet[2780]: E0120 01:33:38.067453 2780 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069530 2780 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069550 2780 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069569 2780 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069800 2780 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069811 2780 state_mem.go:96] 
"Updated CPUSet assignments" assignments={} Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069828 2780 policy_none.go:49] "None policy: Start" Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069839 2780 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:33:38.069879 kubelet[2780]: I0120 01:33:38.069850 2780 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:33:38.070273 kubelet[2780]: I0120 01:33:38.069964 2780 state_mem.go:75] "Updated machine memory state" Jan 20 01:33:38.085778 kubelet[2780]: I0120 01:33:38.084975 2780 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:33:38.085778 kubelet[2780]: I0120 01:33:38.085302 2780 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:33:38.085778 kubelet[2780]: I0120 01:33:38.085317 2780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:33:38.085778 kubelet[2780]: I0120 01:33:38.085656 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:33:38.089477 kubelet[2780]: E0120 01:33:38.089445 2780 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:33:38.204790 kubelet[2780]: I0120 01:33:38.204312 2780 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:33:38.220020 kubelet[2780]: I0120 01:33:38.219962 2780 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 01:33:38.220216 kubelet[2780]: I0120 01:33:38.220129 2780 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:33:38.468949 kubelet[2780]: I0120 01:33:38.468783 2780 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:33:38.469607 kubelet[2780]: I0120 01:33:38.469502 2780 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:38.471548 kubelet[2780]: I0120 01:33:38.471453 2780 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:38.603535 kubelet[2780]: I0120 01:33:38.602965 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:38.614630 kubelet[2780]: I0120 01:33:38.614557 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:38.614630 kubelet[2780]: I0120 01:33:38.614610 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:38.615577 kubelet[2780]: I0120 01:33:38.614640 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:33:38.615577 kubelet[2780]: I0120 01:33:38.614663 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c637b139afc147f6d50e8833168857b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c637b139afc147f6d50e8833168857b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:38.615577 kubelet[2780]: I0120 01:33:38.614685 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c637b139afc147f6d50e8833168857b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c637b139afc147f6d50e8833168857b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:38.615577 kubelet[2780]: I0120 01:33:38.614708 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c637b139afc147f6d50e8833168857b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c637b139afc147f6d50e8833168857b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:33:38.615577 kubelet[2780]: I0120 01:33:38.614967 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:38.615779 kubelet[2780]: I0120 01:33:38.614991 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:33:38.641129 kubelet[2780]: I0120 01:33:38.640693 2780 apiserver.go:52] "Watching apiserver" Jan 20 01:33:38.655168 kubelet[2780]: I0120 01:33:38.655076 2780 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:33:38.781250 kubelet[2780]: E0120 01:33:38.781052 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:38.787371 kubelet[2780]: E0120 01:33:38.787244 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:38.788180 kubelet[2780]: E0120 01:33:38.787941 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:38.835425 kubelet[2780]: I0120 01:33:38.835009 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.834986362 podStartE2EDuration="834.986362ms" podCreationTimestamp="2026-01-20 01:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:33:38.815955826 +0000 UTC m=+1.260466625" watchObservedRunningTime="2026-01-20 01:33:38.834986362 +0000 UTC m=+1.279497160" Jan 20 01:33:38.853880 kubelet[2780]: I0120 01:33:38.853445 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.853427656 podStartE2EDuration="853.427656ms" podCreationTimestamp="2026-01-20 01:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:33:38.835491813 +0000 UTC m=+1.280002612" watchObservedRunningTime="2026-01-20 01:33:38.853427656 +0000 UTC m=+1.297938455" Jan 20 01:33:38.854445 kubelet[2780]: I0120 01:33:38.854187 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.854176209 podStartE2EDuration="854.176209ms" podCreationTimestamp="2026-01-20 01:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:33:38.853063883 +0000 UTC m=+1.297574691" watchObservedRunningTime="2026-01-20 01:33:38.854176209 +0000 UTC m=+1.298687028" Jan 20 01:33:39.050438 kubelet[2780]: E0120 01:33:39.049807 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:39.050438 kubelet[2780]: E0120 01:33:39.050141 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:39.051060 kubelet[2780]: E0120 01:33:39.050911 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 01:33:40.052151 kubelet[2780]: E0120 01:33:40.051995 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:40.052151 kubelet[2780]: E0120 01:33:40.052186 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:41.054011 kubelet[2780]: E0120 01:33:41.053932 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:42.679532 kubelet[2780]: I0120 01:33:42.679188 2780 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:33:42.680898 kubelet[2780]: I0120 01:33:42.680185 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:33:42.680944 containerd[1611]: time="2026-01-20T01:33:42.679904080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 01:33:43.319147 kubelet[2780]: E0120 01:33:43.318929 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:43.593809 systemd[1]: Created slice kubepods-besteffort-pod02f4b4e2_69e4_4af7_a994_306b4b32090f.slice - libcontainer container kubepods-besteffort-pod02f4b4e2_69e4_4af7_a994_306b4b32090f.slice. 
Jan 20 01:33:43.651654 kubelet[2780]: I0120 01:33:43.651543 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02f4b4e2-69e4-4af7-a994-306b4b32090f-lib-modules\") pod \"kube-proxy-qtvzc\" (UID: \"02f4b4e2-69e4-4af7-a994-306b4b32090f\") " pod="kube-system/kube-proxy-qtvzc" Jan 20 01:33:43.651654 kubelet[2780]: I0120 01:33:43.651637 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02f4b4e2-69e4-4af7-a994-306b4b32090f-kube-proxy\") pod \"kube-proxy-qtvzc\" (UID: \"02f4b4e2-69e4-4af7-a994-306b4b32090f\") " pod="kube-system/kube-proxy-qtvzc" Jan 20 01:33:43.651654 kubelet[2780]: I0120 01:33:43.651660 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02f4b4e2-69e4-4af7-a994-306b4b32090f-xtables-lock\") pod \"kube-proxy-qtvzc\" (UID: \"02f4b4e2-69e4-4af7-a994-306b4b32090f\") " pod="kube-system/kube-proxy-qtvzc" Jan 20 01:33:43.651939 kubelet[2780]: I0120 01:33:43.651758 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcw5w\" (UniqueName: \"kubernetes.io/projected/02f4b4e2-69e4-4af7-a994-306b4b32090f-kube-api-access-jcw5w\") pod \"kube-proxy-qtvzc\" (UID: \"02f4b4e2-69e4-4af7-a994-306b4b32090f\") " pod="kube-system/kube-proxy-qtvzc" Jan 20 01:33:43.813390 systemd[1]: Created slice kubepods-besteffort-poda348920d_233d_4177_a1b0_d8724d8e2716.slice - libcontainer container kubepods-besteffort-poda348920d_233d_4177_a1b0_d8724d8e2716.slice. 
Jan 20 01:33:43.911518 kubelet[2780]: E0120 01:33:43.911297 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:43.912475 containerd[1611]: time="2026-01-20T01:33:43.912316711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qtvzc,Uid:02f4b4e2-69e4-4af7-a994-306b4b32090f,Namespace:kube-system,Attempt:0,}" Jan 20 01:33:43.953559 kubelet[2780]: I0120 01:33:43.953492 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a348920d-233d-4177-a1b0-d8724d8e2716-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jnfgc\" (UID: \"a348920d-233d-4177-a1b0-d8724d8e2716\") " pod="tigera-operator/tigera-operator-7dcd859c48-jnfgc" Jan 20 01:33:43.953738 kubelet[2780]: I0120 01:33:43.953568 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrrwz\" (UniqueName: \"kubernetes.io/projected/a348920d-233d-4177-a1b0-d8724d8e2716-kube-api-access-hrrwz\") pod \"tigera-operator-7dcd859c48-jnfgc\" (UID: \"a348920d-233d-4177-a1b0-d8724d8e2716\") " pod="tigera-operator/tigera-operator-7dcd859c48-jnfgc" Jan 20 01:33:43.963282 containerd[1611]: time="2026-01-20T01:33:43.963159929Z" level=info msg="connecting to shim c5e63a109f2f0447f56245437620da3dd792c35e33fd7412b9c29bf9eee1a663" address="unix:///run/containerd/s/3b07e519ab4861a6bf07d934585b0df89b9922b5e5536592bf412ecdc22d22ac" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:33:44.062415 systemd[1]: Started cri-containerd-c5e63a109f2f0447f56245437620da3dd792c35e33fd7412b9c29bf9eee1a663.scope - libcontainer container c5e63a109f2f0447f56245437620da3dd792c35e33fd7412b9c29bf9eee1a663. 
Jan 20 01:33:44.062935 kubelet[2780]: E0120 01:33:44.062803 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:44.085161 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 01:33:44.085270 kernel: audit: type=1334 audit(1768872824.079:418): prog-id=131 op=LOAD Jan 20 01:33:44.079000 audit: BPF prog-id=131 op=LOAD Jan 20 01:33:44.084000 audit: BPF prog-id=132 op=LOAD Jan 20 01:33:44.087912 kernel: audit: type=1334 audit(1768872824.084:419): prog-id=132 op=LOAD Jan 20 01:33:44.087955 kernel: audit: type=1300 audit(1768872824.084:419): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.084000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.096890 kernel: audit: type=1327 audit(1768872824.084:419): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.084000 audit: BPF prog-id=132 op=UNLOAD Jan 20 
01:33:44.107753 kernel: audit: type=1334 audit(1768872824.084:420): prog-id=132 op=UNLOAD Jan 20 01:33:44.107798 kernel: audit: type=1300 audit(1768872824.084:420): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.084000 audit[2854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.118918 kernel: audit: type=1327 audit(1768872824.084:420): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.131930 kernel: audit: type=1334 audit(1768872824.084:421): prog-id=133 op=LOAD Jan 20 01:33:44.134568 kernel: audit: type=1300 audit(1768872824.084:421): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.084000 audit: BPF prog-id=133 op=LOAD Jan 20 01:33:44.084000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c00017a488 a2=98 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.135159 containerd[1611]: time="2026-01-20T01:33:44.134854854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jnfgc,Uid:a348920d-233d-4177-a1b0-d8724d8e2716,Namespace:tigera-operator,Attempt:0,}" Jan 20 01:33:44.139171 kernel: audit: type=1327 audit(1768872824.084:421): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.084000 audit: BPF prog-id=134 op=LOAD Jan 20 01:33:44.084000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.085000 audit: BPF prog-id=134 op=UNLOAD Jan 20 01:33:44.085000 audit[2854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2843 
pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.085000 audit: BPF prog-id=133 op=UNLOAD Jan 20 01:33:44.085000 audit[2854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.085000 audit: BPF prog-id=135 op=LOAD Jan 20 01:33:44.085000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2843 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653633613130396632663034343766353632343534333736323064 Jan 20 01:33:44.181569 containerd[1611]: time="2026-01-20T01:33:44.180255306Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-qtvzc,Uid:02f4b4e2-69e4-4af7-a994-306b4b32090f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5e63a109f2f0447f56245437620da3dd792c35e33fd7412b9c29bf9eee1a663\"" Jan 20 01:33:44.186257 kubelet[2780]: E0120 01:33:44.186053 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:44.198981 containerd[1611]: time="2026-01-20T01:33:44.196618993Z" level=info msg="CreateContainer within sandbox \"c5e63a109f2f0447f56245437620da3dd792c35e33fd7412b9c29bf9eee1a663\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:33:44.210070 containerd[1611]: time="2026-01-20T01:33:44.210004785Z" level=info msg="connecting to shim f7ce9f9cd3189ce53fb38c98d4d65264c32d1687cc174a5c37d633fdc927e94a" address="unix:///run/containerd/s/02c7c9756dd1649030820b28a80642ced63bdb698bcb6ab64e1dbd72f05e5fed" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:33:44.219306 containerd[1611]: time="2026-01-20T01:33:44.219224812Z" level=info msg="Container 201b544fa2ad0ea96c1a77e26830077dfa28feb70b7512a03c5f72afed5cce47: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:33:44.229389 containerd[1611]: time="2026-01-20T01:33:44.229340100Z" level=info msg="CreateContainer within sandbox \"c5e63a109f2f0447f56245437620da3dd792c35e33fd7412b9c29bf9eee1a663\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"201b544fa2ad0ea96c1a77e26830077dfa28feb70b7512a03c5f72afed5cce47\"" Jan 20 01:33:44.231148 containerd[1611]: time="2026-01-20T01:33:44.230491304Z" level=info msg="StartContainer for \"201b544fa2ad0ea96c1a77e26830077dfa28feb70b7512a03c5f72afed5cce47\"" Jan 20 01:33:44.235752 containerd[1611]: time="2026-01-20T01:33:44.235666538Z" level=info msg="connecting to shim 201b544fa2ad0ea96c1a77e26830077dfa28feb70b7512a03c5f72afed5cce47" 
address="unix:///run/containerd/s/3b07e519ab4861a6bf07d934585b0df89b9922b5e5536592bf412ecdc22d22ac" protocol=ttrpc version=3 Jan 20 01:33:44.247326 systemd[1]: Started cri-containerd-f7ce9f9cd3189ce53fb38c98d4d65264c32d1687cc174a5c37d633fdc927e94a.scope - libcontainer container f7ce9f9cd3189ce53fb38c98d4d65264c32d1687cc174a5c37d633fdc927e94a. Jan 20 01:33:44.301457 update_engine[1580]: I20260120 01:33:44.301295 1580 update_attempter.cc:509] Updating boot flags... Jan 20 01:33:44.303335 systemd[1]: Started cri-containerd-201b544fa2ad0ea96c1a77e26830077dfa28feb70b7512a03c5f72afed5cce47.scope - libcontainer container 201b544fa2ad0ea96c1a77e26830077dfa28feb70b7512a03c5f72afed5cce47. Jan 20 01:33:44.305000 audit: BPF prog-id=136 op=LOAD Jan 20 01:33:44.306000 audit: BPF prog-id=137 op=LOAD Jan 20 01:33:44.306000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2888 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636539663963643331383963653533666233386339386434643635 Jan 20 01:33:44.306000 audit: BPF prog-id=137 op=UNLOAD Jan 20 01:33:44.306000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.306000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636539663963643331383963653533666233386339386434643635 Jan 20 01:33:44.306000 audit: BPF prog-id=138 op=LOAD Jan 20 01:33:44.306000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2888 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636539663963643331383963653533666233386339386434643635 Jan 20 01:33:44.307000 audit: BPF prog-id=139 op=LOAD Jan 20 01:33:44.307000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2888 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636539663963643331383963653533666233386339386434643635 Jan 20 01:33:44.307000 audit: BPF prog-id=139 op=UNLOAD Jan 20 01:33:44.307000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:33:44.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636539663963643331383963653533666233386339386434643635 Jan 20 01:33:44.307000 audit: BPF prog-id=138 op=UNLOAD Jan 20 01:33:44.307000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636539663963643331383963653533666233386339386434643635 Jan 20 01:33:44.307000 audit: BPF prog-id=140 op=LOAD Jan 20 01:33:44.307000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2888 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636539663963643331383963653533666233386339386434643635 Jan 20 01:33:44.457048 containerd[1611]: time="2026-01-20T01:33:44.456664314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jnfgc,Uid:a348920d-233d-4177-a1b0-d8724d8e2716,Namespace:tigera-operator,Attempt:0,} returns sandbox id 
\"f7ce9f9cd3189ce53fb38c98d4d65264c32d1687cc174a5c37d633fdc927e94a\"" Jan 20 01:33:44.464071 containerd[1611]: time="2026-01-20T01:33:44.463908390Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 01:33:44.555000 audit: BPF prog-id=141 op=LOAD Jan 20 01:33:44.555000 audit[2914]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000034488 a2=98 a3=0 items=0 ppid=2843 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230316235343466613261643065613936633161373765323638333030 Jan 20 01:33:44.555000 audit: BPF prog-id=142 op=LOAD Jan 20 01:33:44.555000 audit[2914]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000034218 a2=98 a3=0 items=0 ppid=2843 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230316235343466613261643065613936633161373765323638333030 Jan 20 01:33:44.555000 audit: BPF prog-id=142 op=UNLOAD Jan 20 01:33:44.555000 audit[2914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2843 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.555000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230316235343466613261643065613936633161373765323638333030 Jan 20 01:33:44.556000 audit: BPF prog-id=141 op=UNLOAD Jan 20 01:33:44.556000 audit[2914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2843 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230316235343466613261643065613936633161373765323638333030 Jan 20 01:33:44.556000 audit: BPF prog-id=143 op=LOAD Jan 20 01:33:44.556000 audit[2914]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000346e8 a2=98 a3=0 items=0 ppid=2843 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230316235343466613261643065613936633161373765323638333030 Jan 20 01:33:44.595123 containerd[1611]: time="2026-01-20T01:33:44.594800853Z" level=info msg="StartContainer for \"201b544fa2ad0ea96c1a77e26830077dfa28feb70b7512a03c5f72afed5cce47\" returns successfully" Jan 20 01:33:44.818000 audit[3011]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Jan 20 01:33:44.818000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe23999bd0 a2=0 a3=7ffe23999bbc items=0 ppid=2936 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 01:33:44.819000 audit[3010]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.819000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe877ebeb0 a2=0 a3=7ffe877ebe9c items=0 ppid=2936 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 01:33:44.821000 audit[3012]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:44.821000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9b1f7c60 a2=0 a3=7ffe9b1f7c4c items=0 ppid=2936 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:33:44.821000 audit[3013]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3013 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.821000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe6cda990 a2=0 a3=7fffe6cda97c items=0 ppid=2936 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:33:44.823000 audit[3014]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.823000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2d9b3430 a2=0 a3=7ffe2d9b341c items=0 ppid=2936 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 01:33:44.824000 audit[3015]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:44.824000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed9080620 a2=0 a3=f6114948fe0d609c items=0 ppid=2936 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 01:33:44.925000 audit[3016]: NETFILTER_CFG table=filter:60 family=2 
entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.925000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffee85d0700 a2=0 a3=7ffee85d06ec items=0 ppid=2936 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 01:33:44.931000 audit[3018]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.931000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffffa498960 a2=0 a3=7ffffa49894c items=0 ppid=2936 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 01:33:44.938000 audit[3021]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.938000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc3c14720 a2=0 a3=7fffc3c1470c items=0 ppid=2936 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.938000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 01:33:44.941000 audit[3022]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.941000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda421e100 a2=0 a3=7ffda421e0ec items=0 ppid=2936 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 01:33:44.946000 audit[3024]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.946000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1dec4d10 a2=0 a3=7ffd1dec4cfc items=0 ppid=2936 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 01:33:44.948000 audit[3025]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.948000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffcf1f7fbd0 a2=0 a3=7ffcf1f7fbbc items=0 ppid=2936 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 01:33:44.953000 audit[3027]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.953000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd794e7050 a2=0 a3=7ffd794e703c items=0 ppid=2936 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 01:33:44.961000 audit[3030]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.961000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed91146c0 a2=0 a3=7ffed91146ac items=0 ppid=2936 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.961000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 01:33:44.963000 audit[3031]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.963000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe231df440 a2=0 a3=7ffe231df42c items=0 ppid=2936 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 01:33:44.968000 audit[3033]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.968000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe800caee0 a2=0 a3=7ffe800caecc items=0 ppid=2936 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 01:33:44.970000 audit[3034]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.970000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc58f3f700 a2=0 a3=7ffc58f3f6ec items=0 ppid=2936 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 01:33:44.975000 audit[3036]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.975000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea6eea5e0 a2=0 a3=7ffea6eea5cc items=0 ppid=2936 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 01:33:44.982000 audit[3039]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.982000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8428da30 a2=0 a3=7ffe8428da1c items=0 ppid=2936 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.982000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 01:33:44.990000 audit[3042]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.990000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff18d85da0 a2=0 a3=7fff18d85d8c items=0 ppid=2936 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 01:33:44.992000 audit[3043]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.992000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5bf6f470 a2=0 a3=7ffd5bf6f45c items=0 ppid=2936 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.992000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 01:33:44.997000 audit[3045]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:44.997000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7fffb2ff7c40 a2=0 a3=7fffb2ff7c2c items=0 ppid=2936 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:44.997000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:33:45.004000 audit[3048]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:45.004000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1944de90 a2=0 a3=7fff1944de7c items=0 ppid=2936 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:33:45.006000 audit[3049]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:45.006000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8fbde640 a2=0 a3=7ffe8fbde62c items=0 ppid=2936 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.006000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 
01:33:45.011000 audit[3051]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:33:45.011000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc09298cb0 a2=0 a3=7ffc09298c9c items=0 ppid=2936 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 01:33:45.046000 audit[3057]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:45.046000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd0f63d640 a2=0 a3=7ffd0f63d62c items=0 ppid=2936 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:45.060000 audit[3057]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:45.060000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd0f63d640 a2=0 a3=7ffd0f63d62c items=0 ppid=2936 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:45.063000 audit[3062]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.063000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe25d1ba50 a2=0 a3=7ffe25d1ba3c items=0 ppid=2936 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 01:33:45.067330 kubelet[2780]: E0120 01:33:45.067262 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:45.070000 audit[3064]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.070000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffefdac0330 a2=0 a3=7ffefdac031c items=0 ppid=2936 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 01:33:45.078000 audit[3067]: 
NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.078000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdfbc89d40 a2=0 a3=7ffdfbc89d2c items=0 ppid=2936 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 01:33:45.081000 audit[3068]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.081000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe27ff3e00 a2=0 a3=7ffe27ff3dec items=0 ppid=2936 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 01:33:45.084898 kubelet[2780]: I0120 01:33:45.084770 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qtvzc" podStartSLOduration=2.084751842 podStartE2EDuration="2.084751842s" podCreationTimestamp="2026-01-20 01:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:33:45.084519479 +0000 UTC m=+7.529030287" watchObservedRunningTime="2026-01-20 01:33:45.084751842 
+0000 UTC m=+7.529262640" Jan 20 01:33:45.088000 audit[3070]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.088000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc9733ffa0 a2=0 a3=7ffc9733ff8c items=0 ppid=2936 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 01:33:45.090000 audit[3071]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.090000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffbd1a690 a2=0 a3=7ffffbd1a67c items=0 ppid=2936 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 01:33:45.095000 audit[3073]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.095000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe7318efd0 a2=0 a3=7ffe7318efbc items=0 ppid=2936 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 01:33:45.102000 audit[3076]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.102000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc670f61e0 a2=0 a3=7ffc670f61cc items=0 ppid=2936 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 01:33:45.105000 audit[3077]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.105000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e32ce90 a2=0 a3=7fff8e32ce7c items=0 ppid=2936 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 01:33:45.110000 audit[3079]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3079 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.110000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffed1066c20 a2=0 a3=7ffed1066c0c items=0 ppid=2936 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.110000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 01:33:45.113000 audit[3080]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.113000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc59d43540 a2=0 a3=7ffc59d4352c items=0 ppid=2936 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 01:33:45.118000 audit[3082]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.118000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd6445100 a2=0 a3=7ffdd64450ec items=0 ppid=2936 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.118000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 01:33:45.125000 audit[3085]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.125000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff7eda3290 a2=0 a3=7fff7eda327c items=0 ppid=2936 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 01:33:45.132000 audit[3088]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.132000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9b82f1e0 a2=0 a3=7ffc9b82f1cc items=0 ppid=2936 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 01:33:45.135000 audit[3089]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.135000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff52b85610 a2=0 a3=7fff52b855fc items=0 ppid=2936 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 01:33:45.140000 audit[3091]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.140000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdd0e5c210 a2=0 a3=7ffdd0e5c1fc items=0 ppid=2936 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:33:45.146000 audit[3094]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.146000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca8f14500 a2=0 a3=7ffca8f144ec items=0 ppid=2936 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.146000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:33:45.148000 audit[3095]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.148000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff61457740 a2=0 a3=7fff6145772c items=0 ppid=2936 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 01:33:45.153000 audit[3097]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.153000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff8a5a69b0 a2=0 a3=7fff8a5a699c items=0 ppid=2936 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 01:33:45.155000 audit[3098]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.155000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc14aba010 a2=0 
a3=7ffc14ab9ffc items=0 ppid=2936 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:33:45.161000 audit[3100]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.161000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0ee773f0 a2=0 a3=7ffe0ee773dc items=0 ppid=2936 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:33:45.169000 audit[3103]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:33:45.169000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc25f4a940 a2=0 a3=7ffc25f4a92c items=0 ppid=2936 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:33:45.177000 audit[3105]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 01:33:45.177000 audit[3105]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe9f406690 a2=0 a3=7ffe9f40667c items=0 ppid=2936 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.177000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:45.177000 audit[3105]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 01:33:45.177000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe9f406690 a2=0 a3=7ffe9f40667c items=0 ppid=2936 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:45.177000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:45.345657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403296358.mount: Deactivated successfully. 
Jan 20 01:33:46.682721 kubelet[2780]: E0120 01:33:46.682612 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:46.786018 containerd[1611]: time="2026-01-20T01:33:46.785870986Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:46.787211 containerd[1611]: time="2026-01-20T01:33:46.787135410Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 20 01:33:46.789788 containerd[1611]: time="2026-01-20T01:33:46.789372931Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:46.793021 containerd[1611]: time="2026-01-20T01:33:46.792960427Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:46.795723 containerd[1611]: time="2026-01-20T01:33:46.795629030Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.331661349s" Jan 20 01:33:46.795723 containerd[1611]: time="2026-01-20T01:33:46.795681428Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 01:33:46.798690 containerd[1611]: time="2026-01-20T01:33:46.798539254Z" level=info msg="CreateContainer within sandbox 
\"f7ce9f9cd3189ce53fb38c98d4d65264c32d1687cc174a5c37d633fdc927e94a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 01:33:46.812272 containerd[1611]: time="2026-01-20T01:33:46.812190985Z" level=info msg="Container 79e3fe51fc37af07b3853b14c58611b848af1734bc118cc02ba47848050d1856: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:33:46.844738 containerd[1611]: time="2026-01-20T01:33:46.844602261Z" level=info msg="CreateContainer within sandbox \"f7ce9f9cd3189ce53fb38c98d4d65264c32d1687cc174a5c37d633fdc927e94a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"79e3fe51fc37af07b3853b14c58611b848af1734bc118cc02ba47848050d1856\"" Jan 20 01:33:46.845536 containerd[1611]: time="2026-01-20T01:33:46.845433762Z" level=info msg="StartContainer for \"79e3fe51fc37af07b3853b14c58611b848af1734bc118cc02ba47848050d1856\"" Jan 20 01:33:46.848035 containerd[1611]: time="2026-01-20T01:33:46.847936106Z" level=info msg="connecting to shim 79e3fe51fc37af07b3853b14c58611b848af1734bc118cc02ba47848050d1856" address="unix:///run/containerd/s/02c7c9756dd1649030820b28a80642ced63bdb698bcb6ab64e1dbd72f05e5fed" protocol=ttrpc version=3 Jan 20 01:33:46.878354 systemd[1]: Started cri-containerd-79e3fe51fc37af07b3853b14c58611b848af1734bc118cc02ba47848050d1856.scope - libcontainer container 79e3fe51fc37af07b3853b14c58611b848af1734bc118cc02ba47848050d1856. 
Jan 20 01:33:46.901000 audit: BPF prog-id=144 op=LOAD Jan 20 01:33:46.902000 audit: BPF prog-id=145 op=LOAD Jan 20 01:33:46.902000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2888 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:46.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653366653531666333376166303762333835336231346335383631 Jan 20 01:33:46.902000 audit: BPF prog-id=145 op=UNLOAD Jan 20 01:33:46.902000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:46.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653366653531666333376166303762333835336231346335383631 Jan 20 01:33:46.902000 audit: BPF prog-id=146 op=LOAD Jan 20 01:33:46.902000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2888 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:46.902000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653366653531666333376166303762333835336231346335383631 Jan 20 01:33:46.903000 audit: BPF prog-id=147 op=LOAD Jan 20 01:33:46.903000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2888 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:46.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653366653531666333376166303762333835336231346335383631 Jan 20 01:33:46.903000 audit: BPF prog-id=147 op=UNLOAD Jan 20 01:33:46.903000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:46.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653366653531666333376166303762333835336231346335383631 Jan 20 01:33:46.903000 audit: BPF prog-id=146 op=UNLOAD Jan 20 01:33:46.903000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:33:46.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653366653531666333376166303762333835336231346335383631 Jan 20 01:33:46.903000 audit: BPF prog-id=148 op=LOAD Jan 20 01:33:46.903000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2888 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:46.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653366653531666333376166303762333835336231346335383631 Jan 20 01:33:46.932990 containerd[1611]: time="2026-01-20T01:33:46.932834330Z" level=info msg="StartContainer for \"79e3fe51fc37af07b3853b14c58611b848af1734bc118cc02ba47848050d1856\" returns successfully" Jan 20 01:33:47.073803 kubelet[2780]: E0120 01:33:47.073649 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:47.105955 kubelet[2780]: I0120 01:33:47.105859 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jnfgc" podStartSLOduration=1.76932065 podStartE2EDuration="4.105842356s" podCreationTimestamp="2026-01-20 01:33:43 +0000 UTC" firstStartedPulling="2026-01-20 01:33:44.460464694 +0000 UTC m=+6.904975492" lastFinishedPulling="2026-01-20 01:33:46.7969864 +0000 UTC m=+9.241497198" observedRunningTime="2026-01-20 01:33:47.105819723 +0000 UTC m=+9.550330551" watchObservedRunningTime="2026-01-20 
01:33:47.105842356 +0000 UTC m=+9.550353153" Jan 20 01:33:50.414834 kubelet[2780]: E0120 01:33:50.414770 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:53.200000 audit[1828]: USER_END pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:53.201306 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 20 01:33:53.203888 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 01:33:53.203976 kernel: audit: type=1106 audit(1768872833.200:498): pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:53.200000 audit[1828]: CRED_DISP pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:33:53.223154 kernel: audit: type=1104 audit(1768872833.200:499): pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 01:33:53.229134 sshd[1827]: Connection closed by 10.0.0.1 port 40410 Jan 20 01:33:53.228419 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Jan 20 01:33:53.232000 audit[1823]: USER_END pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:53.240015 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:40410.service: Deactivated successfully. Jan 20 01:33:53.254479 kernel: audit: type=1106 audit(1768872833.232:500): pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:53.254532 kernel: audit: type=1104 audit(1768872833.233:501): pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:53.233000 audit[1823]: CRED_DISP pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:33:53.243921 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:33:53.244667 systemd[1]: session-8.scope: Consumed 7.482s CPU time, 212M memory peak. Jan 20 01:33:53.246405 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. 
Jan 20 01:33:53.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.144:22-10.0.0.1:40410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:53.258423 kernel: audit: type=1131 audit(1768872833.239:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.144:22-10.0.0.1:40410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:33:53.261448 systemd-logind[1578]: Removed session 8. Jan 20 01:33:53.982000 audit[3205]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:53.990154 kernel: audit: type=1325 audit(1768872833.982:503): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:53.990269 kernel: audit: type=1300 audit(1768872833.982:503): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc60e7fc0 a2=0 a3=7ffcc60e7fac items=0 ppid=2936 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:53.982000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc60e7fc0 a2=0 a3=7ffcc60e7fac items=0 ppid=2936 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:54.009671 kernel: audit: type=1327 audit(1768872833.982:503): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:53.982000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:54.011000 audit[3205]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:54.011000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc60e7fc0 a2=0 a3=0 items=0 ppid=2936 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:54.031656 kernel: audit: type=1325 audit(1768872834.011:504): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:54.031840 kernel: audit: type=1300 audit(1768872834.011:504): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc60e7fc0 a2=0 a3=0 items=0 ppid=2936 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:54.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:54.049000 audit[3207]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:54.049000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff318e6d60 a2=0 a3=7fff318e6d4c items=0 ppid=2936 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:54.049000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:54.064000 audit[3207]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:54.064000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff318e6d60 a2=0 a3=0 items=0 ppid=2936 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:54.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:56.296000 audit[3209]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:56.296000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffca912ea00 a2=0 a3=7ffca912e9ec items=0 ppid=2936 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:56.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:56.305000 audit[3209]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:56.305000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca912ea00 a2=0 a3=0 items=0 ppid=2936 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:33:56.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:56.329000 audit[3211]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:56.329000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd693164e0 a2=0 a3=7ffd693164cc items=0 ppid=2936 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:56.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:56.340000 audit[3211]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:56.340000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd693164e0 a2=0 a3=0 items=0 ppid=2936 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:56.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:57.356000 audit[3213]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:57.356000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffefd29ea10 a2=0 a3=7ffefd29e9fc items=0 ppid=2936 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:57.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:57.370000 audit[3213]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:57.370000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffefd29ea10 a2=0 a3=0 items=0 ppid=2936 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:57.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:58.212240 systemd[1]: Created slice kubepods-besteffort-pod0ce76d48_7de9_41a1_a7cc_8d0ad60be698.slice - libcontainer container kubepods-besteffort-pod0ce76d48_7de9_41a1_a7cc_8d0ad60be698.slice. 
Jan 20 01:33:58.366710 kubelet[2780]: I0120 01:33:58.366503 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ce76d48-7de9-41a1-a7cc-8d0ad60be698-tigera-ca-bundle\") pod \"calico-typha-c8f75c7bb-7f8k6\" (UID: \"0ce76d48-7de9-41a1-a7cc-8d0ad60be698\") " pod="calico-system/calico-typha-c8f75c7bb-7f8k6" Jan 20 01:33:58.366710 kubelet[2780]: I0120 01:33:58.366576 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0ce76d48-7de9-41a1-a7cc-8d0ad60be698-typha-certs\") pod \"calico-typha-c8f75c7bb-7f8k6\" (UID: \"0ce76d48-7de9-41a1-a7cc-8d0ad60be698\") " pod="calico-system/calico-typha-c8f75c7bb-7f8k6" Jan 20 01:33:58.366710 kubelet[2780]: I0120 01:33:58.366605 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66h59\" (UniqueName: \"kubernetes.io/projected/0ce76d48-7de9-41a1-a7cc-8d0ad60be698-kube-api-access-66h59\") pod \"calico-typha-c8f75c7bb-7f8k6\" (UID: \"0ce76d48-7de9-41a1-a7cc-8d0ad60be698\") " pod="calico-system/calico-typha-c8f75c7bb-7f8k6" Jan 20 01:33:58.391000 audit[3215]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:58.397163 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 20 01:33:58.397268 kernel: audit: type=1325 audit(1768872838.391:513): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:58.408283 kernel: audit: type=1300 audit(1768872838.391:513): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc87a2d710 a2=0 a3=7ffc87a2d6fc items=0 ppid=2936 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.391000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc87a2d710 a2=0 a3=7ffc87a2d6fc items=0 ppid=2936 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:58.433853 kernel: audit: type=1327 audit(1768872838.391:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:58.434000 audit[3215]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:58.443269 kernel: audit: type=1325 audit(1768872838.434:514): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:33:58.434000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc87a2d710 a2=0 a3=0 items=0 ppid=2936 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.454144 systemd[1]: Created slice kubepods-besteffort-podcf71cb8f_cefd_486d_ac9b_fba425d4a7a3.slice - libcontainer container kubepods-besteffort-podcf71cb8f_cefd_486d_ac9b_fba425d4a7a3.slice. 
Jan 20 01:33:58.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:58.463484 kernel: audit: type=1300 audit(1768872838.434:514): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc87a2d710 a2=0 a3=0 items=0 ppid=2936 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.463595 kernel: audit: type=1327 audit(1768872838.434:514): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:33:58.518878 kubelet[2780]: E0120 01:33:58.518782 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:58.521012 containerd[1611]: time="2026-01-20T01:33:58.520059359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c8f75c7bb-7f8k6,Uid:0ce76d48-7de9-41a1-a7cc-8d0ad60be698,Namespace:calico-system,Attempt:0,}" Jan 20 01:33:58.568949 kubelet[2780]: I0120 01:33:58.568734 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-node-certs\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.568949 kubelet[2780]: I0120 01:33:58.568788 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-var-lib-calico\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.568949 
kubelet[2780]: I0120 01:33:58.568810 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-var-run-calico\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.568949 kubelet[2780]: I0120 01:33:58.568839 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-flexvol-driver-host\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.568949 kubelet[2780]: I0120 01:33:58.568872 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf56r\" (UniqueName: \"kubernetes.io/projected/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-kube-api-access-wf56r\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.571495 kubelet[2780]: I0120 01:33:58.568938 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-cni-bin-dir\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.571495 kubelet[2780]: I0120 01:33:58.568981 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-tigera-ca-bundle\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.571495 kubelet[2780]: I0120 01:33:58.569011 2780 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-xtables-lock\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.571495 kubelet[2780]: I0120 01:33:58.569042 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-cni-log-dir\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.571495 kubelet[2780]: I0120 01:33:58.569073 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-cni-net-dir\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.573031 kubelet[2780]: I0120 01:33:58.569166 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-lib-modules\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.573031 kubelet[2780]: I0120 01:33:58.569195 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cf71cb8f-cefd-486d-ac9b-fba425d4a7a3-policysync\") pod \"calico-node-bpvml\" (UID: \"cf71cb8f-cefd-486d-ac9b-fba425d4a7a3\") " pod="calico-system/calico-node-bpvml" Jan 20 01:33:58.575838 containerd[1611]: time="2026-01-20T01:33:58.575740146Z" level=info msg="connecting to shim 
18ce1e281a42744550a85373ed7b252daff138ceeb45e1d424d249d5c78e2c1b" address="unix:///run/containerd/s/1452cf6da7e1fcb57e980515619623e16f307cf26459fc6820af71ea0a515987" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:33:58.639825 kubelet[2780]: E0120 01:33:58.639588 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:33:58.647506 systemd[1]: Started cri-containerd-18ce1e281a42744550a85373ed7b252daff138ceeb45e1d424d249d5c78e2c1b.scope - libcontainer container 18ce1e281a42744550a85373ed7b252daff138ceeb45e1d424d249d5c78e2c1b. Jan 20 01:33:58.678000 audit: BPF prog-id=149 op=LOAD Jan 20 01:33:58.683130 kernel: audit: type=1334 audit(1768872838.678:515): prog-id=149 op=LOAD Jan 20 01:33:58.682000 audit: BPF prog-id=150 op=LOAD Jan 20 01:33:58.687994 kernel: audit: type=1334 audit(1768872838.682:516): prog-id=150 op=LOAD Jan 20 01:33:58.688059 kernel: audit: type=1300 audit(1768872838.682:516): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.682000 audit[3237]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.688768 kubelet[2780]: E0120 01:33:58.688373 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.689532 
kubelet[2780]: W0120 01:33:58.689258 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.692489 kubelet[2780]: E0120 01:33:58.691332 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.696000 kubelet[2780]: E0120 01:33:58.695796 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.696000 kubelet[2780]: W0120 01:33:58.695915 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.696000 kubelet[2780]: E0120 01:33:58.695936 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.697861 kubelet[2780]: E0120 01:33:58.697657 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.697861 kubelet[2780]: W0120 01:33:58.697673 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.697861 kubelet[2780]: E0120 01:33:58.697728 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.698607 kubelet[2780]: E0120 01:33:58.698590 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.698818 kubelet[2780]: W0120 01:33:58.698800 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.699042 kubelet[2780]: E0120 01:33:58.698959 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.699988 kubelet[2780]: E0120 01:33:58.699971 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.700063 kubelet[2780]: W0120 01:33:58.700049 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.700442 kubelet[2780]: E0120 01:33:58.700167 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.702317 kubelet[2780]: E0120 01:33:58.702300 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.702471 kubelet[2780]: W0120 01:33:58.702454 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.702546 kubelet[2780]: E0120 01:33:58.702532 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.704516 kubelet[2780]: E0120 01:33:58.704379 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.704516 kubelet[2780]: W0120 01:33:58.704396 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.704516 kubelet[2780]: E0120 01:33:58.704411 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.704947 kubelet[2780]: E0120 01:33:58.704840 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.705473 kubelet[2780]: W0120 01:33:58.705453 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.705757 kubelet[2780]: E0120 01:33:58.705738 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.706782 kubelet[2780]: E0120 01:33:58.706637 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.706887 kubelet[2780]: W0120 01:33:58.706770 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.707001 kubelet[2780]: E0120 01:33:58.706891 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.708149 kubelet[2780]: E0120 01:33:58.708040 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.708149 kubelet[2780]: W0120 01:33:58.708064 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.708256 kubelet[2780]: E0120 01:33:58.708179 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.718419 kubelet[2780]: E0120 01:33:58.708896 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.718419 kubelet[2780]: W0120 01:33:58.708909 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.718419 kubelet[2780]: E0120 01:33:58.708926 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.718419 kubelet[2780]: E0120 01:33:58.709375 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.718419 kubelet[2780]: W0120 01:33:58.709386 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.718419 kubelet[2780]: E0120 01:33:58.709398 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.718419 kubelet[2780]: E0120 01:33:58.709852 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.718419 kubelet[2780]: W0120 01:33:58.709863 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.718419 kubelet[2780]: E0120 01:33:58.709874 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.718419 kubelet[2780]: E0120 01:33:58.710323 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.719513 kubelet[2780]: W0120 01:33:58.710387 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.719513 kubelet[2780]: E0120 01:33:58.710398 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.719513 kubelet[2780]: E0120 01:33:58.710808 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.719513 kubelet[2780]: W0120 01:33:58.710817 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.719513 kubelet[2780]: E0120 01:33:58.710826 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.719513 kubelet[2780]: E0120 01:33:58.711283 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.719513 kubelet[2780]: W0120 01:33:58.711295 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.719513 kubelet[2780]: E0120 01:33:58.711307 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.719513 kubelet[2780]: E0120 01:33:58.711582 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.719513 kubelet[2780]: W0120 01:33:58.711593 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.720271 kernel: audit: type=1327 audit(1768872838.682:516): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.720317 kubelet[2780]: E0120 01:33:58.711604 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.720317 kubelet[2780]: E0120 01:33:58.711890 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.720317 kubelet[2780]: W0120 01:33:58.711902 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.720317 kubelet[2780]: E0120 01:33:58.711912 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.720317 kubelet[2780]: E0120 01:33:58.712205 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.720317 kubelet[2780]: W0120 01:33:58.712215 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.720317 kubelet[2780]: E0120 01:33:58.712225 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.720317 kubelet[2780]: E0120 01:33:58.712651 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.720317 kubelet[2780]: W0120 01:33:58.712662 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.720317 kubelet[2780]: E0120 01:33:58.712673 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.720658 kubelet[2780]: E0120 01:33:58.712993 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.720658 kubelet[2780]: W0120 01:33:58.713004 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.720658 kubelet[2780]: E0120 01:33:58.713015 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.720658 kubelet[2780]: E0120 01:33:58.714042 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.720658 kubelet[2780]: W0120 01:33:58.714054 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.720658 kubelet[2780]: E0120 01:33:58.714065 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.682000 audit: BPF prog-id=150 op=UNLOAD Jan 20 01:33:58.682000 audit[3237]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.682000 audit: BPF prog-id=151 op=LOAD Jan 20 01:33:58.682000 audit[3237]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.682000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.682000 audit: BPF prog-id=152 op=LOAD Jan 20 01:33:58.682000 audit[3237]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.682000 audit: BPF prog-id=152 op=UNLOAD Jan 20 01:33:58.682000 audit[3237]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.682000 audit: BPF prog-id=151 op=UNLOAD Jan 20 01:33:58.682000 audit[3237]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:33:58.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.682000 audit: BPF prog-id=153 op=LOAD Jan 20 01:33:58.682000 audit[3237]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3226 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138636531653238316134323734343535306138353337336564376232 Jan 20 01:33:58.758882 kubelet[2780]: E0120 01:33:58.758847 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:58.761666 containerd[1611]: time="2026-01-20T01:33:58.760299618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bpvml,Uid:cf71cb8f-cefd-486d-ac9b-fba425d4a7a3,Namespace:calico-system,Attempt:0,}" Jan 20 01:33:58.772571 kubelet[2780]: E0120 01:33:58.772441 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.773010 kubelet[2780]: W0120 01:33:58.772861 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.773010 kubelet[2780]: E0120 01:33:58.772892 2780 plugins.go:695] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.773010 kubelet[2780]: I0120 01:33:58.772928 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/164d51f9-eed6-48ef-9188-a78d4106afb9-registration-dir\") pod \"csi-node-driver-phdz7\" (UID: \"164d51f9-eed6-48ef-9188-a78d4106afb9\") " pod="calico-system/csi-node-driver-phdz7" Jan 20 01:33:58.773406 kubelet[2780]: E0120 01:33:58.773379 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.773406 kubelet[2780]: W0120 01:33:58.773392 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.773604 kubelet[2780]: E0120 01:33:58.773546 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.773789 kubelet[2780]: I0120 01:33:58.773569 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/164d51f9-eed6-48ef-9188-a78d4106afb9-socket-dir\") pod \"csi-node-driver-phdz7\" (UID: \"164d51f9-eed6-48ef-9188-a78d4106afb9\") " pod="calico-system/csi-node-driver-phdz7" Jan 20 01:33:58.774205 kubelet[2780]: E0120 01:33:58.774151 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.774205 kubelet[2780]: W0120 01:33:58.774189 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.774326 kubelet[2780]: E0120 01:33:58.774313 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.774905 kubelet[2780]: E0120 01:33:58.774877 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.774905 kubelet[2780]: W0120 01:33:58.774890 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.775119 kubelet[2780]: E0120 01:33:58.775034 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.775300 kubelet[2780]: I0120 01:33:58.775277 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vls\" (UniqueName: \"kubernetes.io/projected/164d51f9-eed6-48ef-9188-a78d4106afb9-kube-api-access-72vls\") pod \"csi-node-driver-phdz7\" (UID: \"164d51f9-eed6-48ef-9188-a78d4106afb9\") " pod="calico-system/csi-node-driver-phdz7" Jan 20 01:33:58.777016 kubelet[2780]: E0120 01:33:58.776976 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.777016 kubelet[2780]: W0120 01:33:58.776995 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.777325 kubelet[2780]: E0120 01:33:58.777196 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.777796 kubelet[2780]: E0120 01:33:58.777734 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.777796 kubelet[2780]: W0120 01:33:58.777764 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.777796 kubelet[2780]: E0120 01:33:58.777778 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.778883 kubelet[2780]: E0120 01:33:58.778859 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.778883 kubelet[2780]: W0120 01:33:58.778878 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.778963 kubelet[2780]: E0120 01:33:58.778896 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.779261 kubelet[2780]: E0120 01:33:58.779207 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.779261 kubelet[2780]: W0120 01:33:58.779240 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.779261 kubelet[2780]: E0120 01:33:58.779257 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.779839 kubelet[2780]: E0120 01:33:58.779814 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.779839 kubelet[2780]: W0120 01:33:58.779836 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.779911 kubelet[2780]: E0120 01:33:58.779892 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.780003 kubelet[2780]: I0120 01:33:58.779969 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/164d51f9-eed6-48ef-9188-a78d4106afb9-varrun\") pod \"csi-node-driver-phdz7\" (UID: \"164d51f9-eed6-48ef-9188-a78d4106afb9\") " pod="calico-system/csi-node-driver-phdz7" Jan 20 01:33:58.780433 kubelet[2780]: E0120 01:33:58.780364 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.780433 kubelet[2780]: W0120 01:33:58.780395 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.780433 kubelet[2780]: E0120 01:33:58.780410 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.780905 kubelet[2780]: E0120 01:33:58.780880 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.780959 kubelet[2780]: W0120 01:33:58.780909 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.781027 kubelet[2780]: E0120 01:33:58.780993 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.782158 kubelet[2780]: E0120 01:33:58.782026 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.782413 kubelet[2780]: W0120 01:33:58.782285 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.782638 kubelet[2780]: E0120 01:33:58.782506 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.782936 kubelet[2780]: E0120 01:33:58.782889 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.783044 kubelet[2780]: W0120 01:33:58.782933 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.783044 kubelet[2780]: E0120 01:33:58.782965 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.783044 kubelet[2780]: I0120 01:33:58.783007 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/164d51f9-eed6-48ef-9188-a78d4106afb9-kubelet-dir\") pod \"csi-node-driver-phdz7\" (UID: \"164d51f9-eed6-48ef-9188-a78d4106afb9\") " pod="calico-system/csi-node-driver-phdz7" Jan 20 01:33:58.783557 kubelet[2780]: E0120 01:33:58.783483 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.783557 kubelet[2780]: W0120 01:33:58.783497 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.783557 kubelet[2780]: E0120 01:33:58.783511 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.783934 kubelet[2780]: E0120 01:33:58.783848 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.783934 kubelet[2780]: W0120 01:33:58.783861 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.783934 kubelet[2780]: E0120 01:33:58.783873 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.785266 containerd[1611]: time="2026-01-20T01:33:58.785036824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c8f75c7bb-7f8k6,Uid:0ce76d48-7de9-41a1-a7cc-8d0ad60be698,Namespace:calico-system,Attempt:0,} returns sandbox id \"18ce1e281a42744550a85373ed7b252daff138ceeb45e1d424d249d5c78e2c1b\"" Jan 20 01:33:58.786209 kubelet[2780]: E0120 01:33:58.786063 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:58.788408 containerd[1611]: time="2026-01-20T01:33:58.788327990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 01:33:58.815472 containerd[1611]: time="2026-01-20T01:33:58.815397000Z" level=info msg="connecting to shim ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8" address="unix:///run/containerd/s/196fe3f01dca3babb0681b1df9d4fc054e4f608ca54fc107b2007b5298f467d9" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:33:58.854379 systemd[1]: Started cri-containerd-ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8.scope - libcontainer container ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8. 
Jan 20 01:33:58.872000 audit: BPF prog-id=154 op=LOAD Jan 20 01:33:58.873000 audit: BPF prog-id=155 op=LOAD Jan 20 01:33:58.873000 audit[3323]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3312 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464663538333164653665393831653432663631353730393634633638 Jan 20 01:33:58.873000 audit: BPF prog-id=155 op=UNLOAD Jan 20 01:33:58.873000 audit[3323]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464663538333164653665393831653432663631353730393634633638 Jan 20 01:33:58.873000 audit: BPF prog-id=156 op=LOAD Jan 20 01:33:58.873000 audit[3323]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3312 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.873000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464663538333164653665393831653432663631353730393634633638 Jan 20 01:33:58.874000 audit: BPF prog-id=157 op=LOAD Jan 20 01:33:58.874000 audit[3323]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3312 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464663538333164653665393831653432663631353730393634633638 Jan 20 01:33:58.874000 audit: BPF prog-id=157 op=UNLOAD Jan 20 01:33:58.874000 audit[3323]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464663538333164653665393831653432663631353730393634633638 Jan 20 01:33:58.874000 audit: BPF prog-id=156 op=UNLOAD Jan 20 01:33:58.874000 audit[3323]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:33:58.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464663538333164653665393831653432663631353730393634633638 Jan 20 01:33:58.874000 audit: BPF prog-id=158 op=LOAD Jan 20 01:33:58.874000 audit[3323]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3312 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:33:58.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464663538333164653665393831653432663631353730393634633638 Jan 20 01:33:58.883975 kubelet[2780]: E0120 01:33:58.883945 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.884374 kubelet[2780]: W0120 01:33:58.883994 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.884374 kubelet[2780]: E0120 01:33:58.884023 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.884813 kubelet[2780]: E0120 01:33:58.884735 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.884813 kubelet[2780]: W0120 01:33:58.884751 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.884813 kubelet[2780]: E0120 01:33:58.884770 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.885901 kubelet[2780]: E0120 01:33:58.885861 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.885901 kubelet[2780]: W0120 01:33:58.885896 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.885973 kubelet[2780]: E0120 01:33:58.885959 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.886588 kubelet[2780]: E0120 01:33:58.886532 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.888414 kubelet[2780]: W0120 01:33:58.888158 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.888414 kubelet[2780]: E0120 01:33:58.888322 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.889295 kubelet[2780]: E0120 01:33:58.889279 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.889450 kubelet[2780]: W0120 01:33:58.889362 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.889662 kubelet[2780]: E0120 01:33:58.889563 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.890037 kubelet[2780]: E0120 01:33:58.889963 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.890187 kubelet[2780]: W0120 01:33:58.889979 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.890434 kubelet[2780]: E0120 01:33:58.890418 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.891454 kubelet[2780]: E0120 01:33:58.891337 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.891969 kubelet[2780]: W0120 01:33:58.891907 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.892731 kubelet[2780]: E0120 01:33:58.892417 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.893364 kubelet[2780]: E0120 01:33:58.893348 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.893532 kubelet[2780]: W0120 01:33:58.893477 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.893790 kubelet[2780]: E0120 01:33:58.893733 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.894667 kubelet[2780]: E0120 01:33:58.894514 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.895861 kubelet[2780]: W0120 01:33:58.894558 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.895861 kubelet[2780]: E0120 01:33:58.895245 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.896459 kubelet[2780]: E0120 01:33:58.896423 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.896459 kubelet[2780]: W0120 01:33:58.896440 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.896719 kubelet[2780]: E0120 01:33:58.896674 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.897214 kubelet[2780]: E0120 01:33:58.897183 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.897214 kubelet[2780]: W0120 01:33:58.897198 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.897510 kubelet[2780]: E0120 01:33:58.897492 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.898054 kubelet[2780]: E0120 01:33:58.898024 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.898054 kubelet[2780]: W0120 01:33:58.898038 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.898657 kubelet[2780]: E0120 01:33:58.898627 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.899060 kubelet[2780]: E0120 01:33:58.899029 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.899060 kubelet[2780]: W0120 01:33:58.899044 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.899355 kubelet[2780]: E0120 01:33:58.899340 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.899825 kubelet[2780]: E0120 01:33:58.899811 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.899897 kubelet[2780]: W0120 01:33:58.899884 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.900149 kubelet[2780]: E0120 01:33:58.900059 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.900559 kubelet[2780]: E0120 01:33:58.900545 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.900643 kubelet[2780]: W0120 01:33:58.900626 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.900803 kubelet[2780]: E0120 01:33:58.900788 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.901588 kubelet[2780]: E0120 01:33:58.901552 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.901588 kubelet[2780]: W0120 01:33:58.901568 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.901836 kubelet[2780]: E0120 01:33:58.901821 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.903930 kubelet[2780]: E0120 01:33:58.903897 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.903930 kubelet[2780]: W0120 01:33:58.903912 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.904192 kubelet[2780]: E0120 01:33:58.904169 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.905442 kubelet[2780]: E0120 01:33:58.905424 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.905569 kubelet[2780]: W0120 01:33:58.905517 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.905980 kubelet[2780]: E0120 01:33:58.905962 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.907551 kubelet[2780]: E0120 01:33:58.907304 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.907551 kubelet[2780]: W0120 01:33:58.907389 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.908501 kubelet[2780]: E0120 01:33:58.907898 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.908501 kubelet[2780]: E0120 01:33:58.907998 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.908501 kubelet[2780]: W0120 01:33:58.908176 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.909255 kubelet[2780]: E0120 01:33:58.908570 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.909255 kubelet[2780]: W0120 01:33:58.908582 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.909255 kubelet[2780]: E0120 01:33:58.909226 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.909373 kubelet[2780]: E0120 01:33:58.909261 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.909412 kubelet[2780]: E0120 01:33:58.909382 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.909412 kubelet[2780]: W0120 01:33:58.909392 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.909412 kubelet[2780]: E0120 01:33:58.909404 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.910038 kubelet[2780]: E0120 01:33:58.909998 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.910038 kubelet[2780]: W0120 01:33:58.910014 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.910038 kubelet[2780]: E0120 01:33:58.910030 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.910533 kubelet[2780]: E0120 01:33:58.910370 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.910533 kubelet[2780]: W0120 01:33:58.910408 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.910533 kubelet[2780]: E0120 01:33:58.910422 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:33:58.911046 kubelet[2780]: E0120 01:33:58.910790 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.911046 kubelet[2780]: W0120 01:33:58.910807 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.911046 kubelet[2780]: E0120 01:33:58.910822 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:33:58.914464 containerd[1611]: time="2026-01-20T01:33:58.913866001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bpvml,Uid:cf71cb8f-cefd-486d-ac9b-fba425d4a7a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8\"" Jan 20 01:33:58.915805 kubelet[2780]: E0120 01:33:58.915649 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:33:58.928370 kubelet[2780]: E0120 01:33:58.928326 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:33:58.928370 kubelet[2780]: W0120 01:33:58.928367 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:33:58.928566 kubelet[2780]: E0120 01:33:58.928392 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:00.222787 containerd[1611]: time="2026-01-20T01:34:00.222650890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:00.223810 containerd[1611]: time="2026-01-20T01:34:00.223731218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 20 01:34:00.224996 containerd[1611]: time="2026-01-20T01:34:00.224914396Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:00.227644 containerd[1611]: time="2026-01-20T01:34:00.227533079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:00.228280 containerd[1611]: time="2026-01-20T01:34:00.228215498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.439715426s" Jan 20 01:34:00.228280 containerd[1611]: time="2026-01-20T01:34:00.228265822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 01:34:00.229748 containerd[1611]: time="2026-01-20T01:34:00.229635565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 01:34:00.245658 containerd[1611]: time="2026-01-20T01:34:00.245588451Z" level=info msg="CreateContainer within sandbox 
\"18ce1e281a42744550a85373ed7b252daff138ceeb45e1d424d249d5c78e2c1b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 01:34:00.262183 containerd[1611]: time="2026-01-20T01:34:00.261376660Z" level=info msg="Container 81c35e8ea0e3c213180df98b33fd25b71c89c364dafb8b427b9f3ffcf496f862: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:34:00.272347 containerd[1611]: time="2026-01-20T01:34:00.272286952Z" level=info msg="CreateContainer within sandbox \"18ce1e281a42744550a85373ed7b252daff138ceeb45e1d424d249d5c78e2c1b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"81c35e8ea0e3c213180df98b33fd25b71c89c364dafb8b427b9f3ffcf496f862\"" Jan 20 01:34:00.276323 containerd[1611]: time="2026-01-20T01:34:00.276233331Z" level=info msg="StartContainer for \"81c35e8ea0e3c213180df98b33fd25b71c89c364dafb8b427b9f3ffcf496f862\"" Jan 20 01:34:00.278809 containerd[1611]: time="2026-01-20T01:34:00.278716102Z" level=info msg="connecting to shim 81c35e8ea0e3c213180df98b33fd25b71c89c364dafb8b427b9f3ffcf496f862" address="unix:///run/containerd/s/1452cf6da7e1fcb57e980515619623e16f307cf26459fc6820af71ea0a515987" protocol=ttrpc version=3 Jan 20 01:34:00.318520 systemd[1]: Started cri-containerd-81c35e8ea0e3c213180df98b33fd25b71c89c364dafb8b427b9f3ffcf496f862.scope - libcontainer container 81c35e8ea0e3c213180df98b33fd25b71c89c364dafb8b427b9f3ffcf496f862. 
Jan 20 01:34:00.344000 audit: BPF prog-id=159 op=LOAD Jan 20 01:34:00.345000 audit: BPF prog-id=160 op=LOAD Jan 20 01:34:00.345000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3226 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:00.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633335653865613065336332313331383064663938623333666432 Jan 20 01:34:00.345000 audit: BPF prog-id=160 op=UNLOAD Jan 20 01:34:00.345000 audit[3386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3226 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:00.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633335653865613065336332313331383064663938623333666432 Jan 20 01:34:00.345000 audit: BPF prog-id=161 op=LOAD Jan 20 01:34:00.345000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3226 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:00.345000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633335653865613065336332313331383064663938623333666432 Jan 20 01:34:00.345000 audit: BPF prog-id=162 op=LOAD Jan 20 01:34:00.345000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3226 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:00.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633335653865613065336332313331383064663938623333666432 Jan 20 01:34:00.345000 audit: BPF prog-id=162 op=UNLOAD Jan 20 01:34:00.345000 audit[3386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3226 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:00.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633335653865613065336332313331383064663938623333666432 Jan 20 01:34:00.345000 audit: BPF prog-id=161 op=UNLOAD Jan 20 01:34:00.345000 audit[3386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3226 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:34:00.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633335653865613065336332313331383064663938623333666432 Jan 20 01:34:00.345000 audit: BPF prog-id=163 op=LOAD Jan 20 01:34:00.345000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3226 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:00.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633335653865613065336332313331383064663938623333666432 Jan 20 01:34:00.402282 containerd[1611]: time="2026-01-20T01:34:00.401908324Z" level=info msg="StartContainer for \"81c35e8ea0e3c213180df98b33fd25b71c89c364dafb8b427b9f3ffcf496f862\" returns successfully" Jan 20 01:34:00.697502 kubelet[2780]: E0120 01:34:00.697418 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:01.113979 kubelet[2780]: E0120 01:34:01.113589 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:01.130866 kubelet[2780]: E0120 01:34:01.130781 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 
01:34:01.130866 kubelet[2780]: W0120 01:34:01.130822 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.130866 kubelet[2780]: E0120 01:34:01.130851 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.131522 kubelet[2780]: E0120 01:34:01.131439 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.131522 kubelet[2780]: W0120 01:34:01.131461 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.131522 kubelet[2780]: E0120 01:34:01.131471 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.131821 kubelet[2780]: E0120 01:34:01.131790 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.131821 kubelet[2780]: W0120 01:34:01.131808 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.131821 kubelet[2780]: E0120 01:34:01.131816 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.132180 kubelet[2780]: E0120 01:34:01.132157 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.132180 kubelet[2780]: W0120 01:34:01.132171 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.132247 kubelet[2780]: E0120 01:34:01.132185 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.132578 kubelet[2780]: E0120 01:34:01.132528 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.132578 kubelet[2780]: W0120 01:34:01.132548 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.132578 kubelet[2780]: E0120 01:34:01.132561 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.132865 kubelet[2780]: E0120 01:34:01.132844 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.132865 kubelet[2780]: W0120 01:34:01.132855 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.132940 kubelet[2780]: E0120 01:34:01.132868 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.133424 kubelet[2780]: E0120 01:34:01.133366 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.133424 kubelet[2780]: W0120 01:34:01.133396 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.133424 kubelet[2780]: E0120 01:34:01.133410 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.133714 kubelet[2780]: E0120 01:34:01.133645 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.133714 kubelet[2780]: W0120 01:34:01.133676 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.133815 kubelet[2780]: E0120 01:34:01.133726 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.134043 kubelet[2780]: E0120 01:34:01.133987 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.134043 kubelet[2780]: W0120 01:34:01.134018 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.134043 kubelet[2780]: E0120 01:34:01.134032 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.134568 kubelet[2780]: E0120 01:34:01.134464 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.134568 kubelet[2780]: W0120 01:34:01.134492 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.134568 kubelet[2780]: E0120 01:34:01.134505 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.134981 kubelet[2780]: E0120 01:34:01.134952 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.134981 kubelet[2780]: W0120 01:34:01.134977 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.135136 kubelet[2780]: E0120 01:34:01.134989 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.135324 kubelet[2780]: E0120 01:34:01.135293 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.135324 kubelet[2780]: W0120 01:34:01.135321 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.135459 kubelet[2780]: E0120 01:34:01.135334 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.135808 kubelet[2780]: E0120 01:34:01.135753 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.135808 kubelet[2780]: W0120 01:34:01.135784 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.135808 kubelet[2780]: E0120 01:34:01.135798 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.136232 kubelet[2780]: E0120 01:34:01.136201 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.136232 kubelet[2780]: W0120 01:34:01.136228 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.136314 kubelet[2780]: E0120 01:34:01.136244 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.136534 kubelet[2780]: E0120 01:34:01.136510 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.136575 kubelet[2780]: W0120 01:34:01.136534 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.136575 kubelet[2780]: E0120 01:34:01.136546 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.215391 kubelet[2780]: E0120 01:34:01.215356 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.215881 kubelet[2780]: W0120 01:34:01.215599 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.215881 kubelet[2780]: E0120 01:34:01.215634 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.216183 kubelet[2780]: E0120 01:34:01.216150 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.216350 kubelet[2780]: W0120 01:34:01.216275 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.216506 kubelet[2780]: E0120 01:34:01.216420 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.216963 kubelet[2780]: E0120 01:34:01.216944 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.217141 kubelet[2780]: W0120 01:34:01.217034 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.217141 kubelet[2780]: E0120 01:34:01.217068 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.218012 kubelet[2780]: E0120 01:34:01.217952 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.218012 kubelet[2780]: W0120 01:34:01.217967 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.218524 kubelet[2780]: E0120 01:34:01.218501 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.218739 kubelet[2780]: E0120 01:34:01.218723 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.218896 kubelet[2780]: W0120 01:34:01.218872 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.219434 kubelet[2780]: E0120 01:34:01.219318 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.220390 kubelet[2780]: E0120 01:34:01.220159 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.220390 kubelet[2780]: W0120 01:34:01.220177 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.220390 kubelet[2780]: E0120 01:34:01.220306 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.220597 kubelet[2780]: E0120 01:34:01.220567 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.220597 kubelet[2780]: W0120 01:34:01.220589 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.220795 kubelet[2780]: E0120 01:34:01.220765 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.221115 kubelet[2780]: E0120 01:34:01.221056 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.221159 kubelet[2780]: W0120 01:34:01.221132 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.221201 kubelet[2780]: E0120 01:34:01.221183 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.221490 kubelet[2780]: E0120 01:34:01.221458 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.221490 kubelet[2780]: W0120 01:34:01.221486 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.221555 kubelet[2780]: E0120 01:34:01.221517 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.222336 kubelet[2780]: E0120 01:34:01.221961 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.222336 kubelet[2780]: W0120 01:34:01.221978 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.222336 kubelet[2780]: E0120 01:34:01.222029 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.222729 kubelet[2780]: E0120 01:34:01.222671 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.222766 kubelet[2780]: W0120 01:34:01.222734 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.222790 kubelet[2780]: E0120 01:34:01.222782 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.223443 kubelet[2780]: E0120 01:34:01.223414 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.223481 kubelet[2780]: W0120 01:34:01.223446 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.223481 kubelet[2780]: E0120 01:34:01.223465 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.223998 kubelet[2780]: E0120 01:34:01.223972 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.224031 kubelet[2780]: W0120 01:34:01.223999 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.224060 kubelet[2780]: E0120 01:34:01.224048 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.224792 kubelet[2780]: E0120 01:34:01.224748 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.224792 kubelet[2780]: W0120 01:34:01.224781 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.224879 kubelet[2780]: E0120 01:34:01.224848 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.225225 kubelet[2780]: E0120 01:34:01.225199 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.225265 kubelet[2780]: W0120 01:34:01.225227 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.225290 kubelet[2780]: E0120 01:34:01.225274 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.225714 kubelet[2780]: E0120 01:34:01.225649 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.225753 kubelet[2780]: W0120 01:34:01.225724 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.225780 kubelet[2780]: E0120 01:34:01.225754 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.226270 kubelet[2780]: E0120 01:34:01.226245 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.226309 kubelet[2780]: W0120 01:34:01.226270 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.226309 kubelet[2780]: E0120 01:34:01.226287 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:34:01.226918 kubelet[2780]: E0120 01:34:01.226815 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:34:01.226959 kubelet[2780]: W0120 01:34:01.226924 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:34:01.226959 kubelet[2780]: E0120 01:34:01.226937 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:34:01.258877 containerd[1611]: time="2026-01-20T01:34:01.258777841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:01.260036 containerd[1611]: time="2026-01-20T01:34:01.259987153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:01.261512 containerd[1611]: time="2026-01-20T01:34:01.261431333Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:01.265369 containerd[1611]: time="2026-01-20T01:34:01.264810879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:01.265721 containerd[1611]: time="2026-01-20T01:34:01.265638468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.035950806s" Jan 20 01:34:01.265761 containerd[1611]: time="2026-01-20T01:34:01.265732674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 01:34:01.268289 containerd[1611]: time="2026-01-20T01:34:01.268234898Z" level=info msg="CreateContainer within sandbox \"ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 01:34:01.279571 containerd[1611]: time="2026-01-20T01:34:01.279500760Z" level=info msg="Container bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:34:01.289972 containerd[1611]: time="2026-01-20T01:34:01.289899617Z" level=info msg="CreateContainer within sandbox \"ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839\"" Jan 20 01:34:01.290880 containerd[1611]: time="2026-01-20T01:34:01.290806149Z" level=info msg="StartContainer for \"bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839\"" Jan 20 01:34:01.293068 containerd[1611]: time="2026-01-20T01:34:01.292984934Z" level=info msg="connecting to shim bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839" address="unix:///run/containerd/s/196fe3f01dca3babb0681b1df9d4fc054e4f608ca54fc107b2007b5298f467d9" protocol=ttrpc version=3 Jan 20 01:34:01.331298 systemd[1]: Started cri-containerd-bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839.scope - libcontainer container bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839. 
Jan 20 01:34:01.411000 audit: BPF prog-id=164 op=LOAD Jan 20 01:34:01.411000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3312 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:01.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262643266383664613339363365633262633162386431663265623962 Jan 20 01:34:01.411000 audit: BPF prog-id=165 op=LOAD Jan 20 01:34:01.411000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3312 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:01.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262643266383664613339363365633262633162386431663265623962 Jan 20 01:34:01.411000 audit: BPF prog-id=165 op=UNLOAD Jan 20 01:34:01.411000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:01.411000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262643266383664613339363365633262633162386431663265623962 Jan 20 01:34:01.411000 audit: BPF prog-id=164 op=UNLOAD Jan 20 01:34:01.411000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:01.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262643266383664613339363365633262633162386431663265623962 Jan 20 01:34:01.411000 audit: BPF prog-id=166 op=LOAD Jan 20 01:34:01.411000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3312 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:01.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262643266383664613339363365633262633162386431663265623962 Jan 20 01:34:01.436013 containerd[1611]: time="2026-01-20T01:34:01.435943146Z" level=info msg="StartContainer for \"bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839\" returns successfully" Jan 20 01:34:01.455836 systemd[1]: cri-containerd-bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839.scope: Deactivated successfully. 
Jan 20 01:34:01.460834 containerd[1611]: time="2026-01-20T01:34:01.460750967Z" level=info msg="received container exit event container_id:\"bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839\" id:\"bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839\" pid:3477 exited_at:{seconds:1768872841 nanos:459725872}" Jan 20 01:34:01.461000 audit: BPF prog-id=166 op=UNLOAD Jan 20 01:34:01.495028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbd2f86da3963ec2bc1b8d1f2eb9ba20793409ef781549a7668b2207ea5f5839-rootfs.mount: Deactivated successfully. Jan 20 01:34:02.120031 kubelet[2780]: I0120 01:34:02.119946 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:34:02.121444 kubelet[2780]: E0120 01:34:02.121003 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:02.121444 kubelet[2780]: E0120 01:34:02.121234 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:02.123192 containerd[1611]: time="2026-01-20T01:34:02.123063521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 01:34:02.142531 kubelet[2780]: I0120 01:34:02.142429 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c8f75c7bb-7f8k6" podStartSLOduration=2.700219044 podStartE2EDuration="4.142410664s" podCreationTimestamp="2026-01-20 01:33:58 +0000 UTC" firstStartedPulling="2026-01-20 01:33:58.787256271 +0000 UTC m=+21.231767069" lastFinishedPulling="2026-01-20 01:34:00.229447881 +0000 UTC m=+22.673958689" observedRunningTime="2026-01-20 01:34:01.130960834 +0000 UTC m=+23.575471642" watchObservedRunningTime="2026-01-20 01:34:02.142410664 +0000 UTC m=+24.586921462" Jan 20 01:34:02.698138 kubelet[2780]: 
E0120 01:34:02.697766 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:04.261331 containerd[1611]: time="2026-01-20T01:34:04.261176936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:04.266859 containerd[1611]: time="2026-01-20T01:34:04.266721463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 20 01:34:04.270973 containerd[1611]: time="2026-01-20T01:34:04.268639333Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:04.299907 containerd[1611]: time="2026-01-20T01:34:04.299667509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:04.301020 containerd[1611]: time="2026-01-20T01:34:04.300848664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.177684224s" Jan 20 01:34:04.301020 containerd[1611]: time="2026-01-20T01:34:04.300906272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 01:34:04.306010 
containerd[1611]: time="2026-01-20T01:34:04.305588861Z" level=info msg="CreateContainer within sandbox \"ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:34:04.332730 containerd[1611]: time="2026-01-20T01:34:04.332593349Z" level=info msg="Container 251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:34:04.366845 containerd[1611]: time="2026-01-20T01:34:04.362822030Z" level=info msg="CreateContainer within sandbox \"ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84\"" Jan 20 01:34:04.367243 containerd[1611]: time="2026-01-20T01:34:04.367218297Z" level=info msg="StartContainer for \"251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84\"" Jan 20 01:34:04.369339 containerd[1611]: time="2026-01-20T01:34:04.369275515Z" level=info msg="connecting to shim 251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84" address="unix:///run/containerd/s/196fe3f01dca3babb0681b1df9d4fc054e4f608ca54fc107b2007b5298f467d9" protocol=ttrpc version=3 Jan 20 01:34:04.431533 systemd[1]: Started cri-containerd-251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84.scope - libcontainer container 251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84. 
Jan 20 01:34:04.550769 kernel: kauditd_printk_skb: 78 callbacks suppressed Jan 20 01:34:04.550931 kernel: audit: type=1334 audit(1768872844.545:545): prog-id=167 op=LOAD Jan 20 01:34:04.545000 audit: BPF prog-id=167 op=LOAD Jan 20 01:34:04.545000 audit[3522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.577768 kernel: audit: type=1300 audit(1768872844.545:545): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.577893 kernel: audit: type=1327 audit(1768872844.545:545): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.590400 kernel: audit: type=1334 audit(1768872844.545:546): prog-id=168 op=LOAD Jan 20 01:34:04.545000 audit: BPF prog-id=168 op=LOAD Jan 20 01:34:04.545000 audit[3522]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.613398 kernel: audit: type=1300 audit(1768872844.545:546): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.613540 kernel: audit: type=1327 audit(1768872844.545:546): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.545000 audit: BPF prog-id=168 op=UNLOAD Jan 20 01:34:04.637298 kernel: audit: type=1334 audit(1768872844.545:547): prog-id=168 op=UNLOAD Jan 20 01:34:04.637423 kernel: audit: type=1300 audit(1768872844.545:547): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.545000 audit[3522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.653831 containerd[1611]: time="2026-01-20T01:34:04.653754601Z" level=info msg="StartContainer for 
\"251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84\" returns successfully" Jan 20 01:34:04.669179 kernel: audit: type=1327 audit(1768872844.545:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.545000 audit: BPF prog-id=167 op=UNLOAD Jan 20 01:34:04.675319 kernel: audit: type=1334 audit(1768872844.545:548): prog-id=167 op=UNLOAD Jan 20 01:34:04.545000 audit[3522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.545000 audit: BPF prog-id=169 op=LOAD Jan 20 01:34:04.545000 audit[3522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3312 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:04.545000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235313633306330376162326562316536386464386163643461336364 Jan 20 01:34:04.699011 kubelet[2780]: E0120 01:34:04.698361 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:05.135516 kubelet[2780]: E0120 01:34:05.133822 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:05.699623 systemd[1]: cri-containerd-251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84.scope: Deactivated successfully. Jan 20 01:34:05.700184 systemd[1]: cri-containerd-251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84.scope: Consumed 907ms CPU time, 177.7M memory peak, 3.8M read from disk, 171.3M written to disk. Jan 20 01:34:05.704000 audit: BPF prog-id=169 op=UNLOAD Jan 20 01:34:05.706158 containerd[1611]: time="2026-01-20T01:34:05.705965944Z" level=info msg="received container exit event container_id:\"251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84\" id:\"251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84\" pid:3536 exited_at:{seconds:1768872845 nanos:702217343}" Jan 20 01:34:05.742348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-251630c07ab2eb1e68dd8acd4a3cd4a2fab348b09cd8f8f9dd83d37fef17ac84-rootfs.mount: Deactivated successfully. 
Jan 20 01:34:05.775381 kubelet[2780]: I0120 01:34:05.775256 2780 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 01:34:05.843943 systemd[1]: Created slice kubepods-besteffort-pode535c75b_4142_4085_8d9d_2841894e5fe8.slice - libcontainer container kubepods-besteffort-pode535c75b_4142_4085_8d9d_2841894e5fe8.slice. Jan 20 01:34:05.857006 systemd[1]: Created slice kubepods-besteffort-poda2ca7657_636b_49d1_99cd_fdd7e6e260be.slice - libcontainer container kubepods-besteffort-poda2ca7657_636b_49d1_99cd_fdd7e6e260be.slice. Jan 20 01:34:05.866047 systemd[1]: Created slice kubepods-besteffort-podd9baf707_371f_47e4_9f67_1785bd6ba68b.slice - libcontainer container kubepods-besteffort-podd9baf707_371f_47e4_9f67_1785bd6ba68b.slice. Jan 20 01:34:05.876547 systemd[1]: Created slice kubepods-burstable-pod7c57aba5_9af6_45bb_832d_1152db895836.slice - libcontainer container kubepods-burstable-pod7c57aba5_9af6_45bb_832d_1152db895836.slice. Jan 20 01:34:05.886783 systemd[1]: Created slice kubepods-besteffort-podc9a4e181_6c6f_4f81_9d5f_8631eccf6c7d.slice - libcontainer container kubepods-besteffort-podc9a4e181_6c6f_4f81_9d5f_8631eccf6c7d.slice. Jan 20 01:34:05.893962 systemd[1]: Created slice kubepods-besteffort-podc55441d4_7803_4009_82ca_ee9ec6a88be8.slice - libcontainer container kubepods-besteffort-podc55441d4_7803_4009_82ca_ee9ec6a88be8.slice. Jan 20 01:34:05.903353 systemd[1]: Created slice kubepods-besteffort-pod93c423b9_f734_475b_aea9_f003af7097a2.slice - libcontainer container kubepods-besteffort-pod93c423b9_f734_475b_aea9_f003af7097a2.slice. Jan 20 01:34:05.910064 systemd[1]: Created slice kubepods-burstable-podaab500f2_8a22_416d_a3ca_7e80812d5776.slice - libcontainer container kubepods-burstable-podaab500f2_8a22_416d_a3ca_7e80812d5776.slice. 
Jan 20 01:34:05.962742 kubelet[2780]: I0120 01:34:05.962199 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ffsq\" (UniqueName: \"kubernetes.io/projected/e535c75b-4142-4085-8d9d-2841894e5fe8-kube-api-access-7ffsq\") pod \"calico-kube-controllers-947d9dcc-bp5fh\" (UID: \"e535c75b-4142-4085-8d9d-2841894e5fe8\") " pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" Jan 20 01:34:05.962742 kubelet[2780]: I0120 01:34:05.962242 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c423b9-f734-475b-aea9-f003af7097a2-goldmane-ca-bundle\") pod \"goldmane-666569f655-rs9sl\" (UID: \"93c423b9-f734-475b-aea9-f003af7097a2\") " pod="calico-system/goldmane-666569f655-rs9sl" Jan 20 01:34:05.962742 kubelet[2780]: I0120 01:34:05.962259 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjbr\" (UniqueName: \"kubernetes.io/projected/7c57aba5-9af6-45bb-832d-1152db895836-kube-api-access-gxjbr\") pod \"coredns-668d6bf9bc-5zczl\" (UID: \"7c57aba5-9af6-45bb-832d-1152db895836\") " pod="kube-system/coredns-668d6bf9bc-5zczl" Jan 20 01:34:05.962742 kubelet[2780]: I0120 01:34:05.962276 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf2km\" (UniqueName: \"kubernetes.io/projected/d9baf707-371f-47e4-9f67-1785bd6ba68b-kube-api-access-mf2km\") pod \"calico-apiserver-dd7bff465-4rkgx\" (UID: \"d9baf707-371f-47e4-9f67-1785bd6ba68b\") " pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" Jan 20 01:34:05.962742 kubelet[2780]: I0120 01:34:05.962290 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwd9\" (UniqueName: \"kubernetes.io/projected/93c423b9-f734-475b-aea9-f003af7097a2-kube-api-access-tmwd9\") pod 
\"goldmane-666569f655-rs9sl\" (UID: \"93c423b9-f734-475b-aea9-f003af7097a2\") " pod="calico-system/goldmane-666569f655-rs9sl" Jan 20 01:34:05.962956 kubelet[2780]: I0120 01:34:05.962305 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rhp\" (UniqueName: \"kubernetes.io/projected/c55441d4-7803-4009-82ca-ee9ec6a88be8-kube-api-access-79rhp\") pod \"calico-apiserver-7c8dd7d667-prz7k\" (UID: \"c55441d4-7803-4009-82ca-ee9ec6a88be8\") " pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" Jan 20 01:34:05.962956 kubelet[2780]: I0120 01:34:05.962319 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-ca-bundle\") pod \"whisker-58f64dd867-nwz5n\" (UID: \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\") " pod="calico-system/whisker-58f64dd867-nwz5n" Jan 20 01:34:05.962956 kubelet[2780]: I0120 01:34:05.962336 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c57aba5-9af6-45bb-832d-1152db895836-config-volume\") pod \"coredns-668d6bf9bc-5zczl\" (UID: \"7c57aba5-9af6-45bb-832d-1152db895836\") " pod="kube-system/coredns-668d6bf9bc-5zczl" Jan 20 01:34:05.962956 kubelet[2780]: I0120 01:34:05.962353 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c55441d4-7803-4009-82ca-ee9ec6a88be8-calico-apiserver-certs\") pod \"calico-apiserver-7c8dd7d667-prz7k\" (UID: \"c55441d4-7803-4009-82ca-ee9ec6a88be8\") " pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" Jan 20 01:34:05.962956 kubelet[2780]: I0120 01:34:05.962366 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4wfz\" (UniqueName: 
\"kubernetes.io/projected/a2ca7657-636b-49d1-99cd-fdd7e6e260be-kube-api-access-h4wfz\") pod \"whisker-58f64dd867-nwz5n\" (UID: \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\") " pod="calico-system/whisker-58f64dd867-nwz5n" Jan 20 01:34:05.963416 kubelet[2780]: I0120 01:34:05.962380 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9baf707-371f-47e4-9f67-1785bd6ba68b-calico-apiserver-certs\") pod \"calico-apiserver-dd7bff465-4rkgx\" (UID: \"d9baf707-371f-47e4-9f67-1785bd6ba68b\") " pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" Jan 20 01:34:05.963416 kubelet[2780]: I0120 01:34:05.962398 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d-calico-apiserver-certs\") pod \"calico-apiserver-7c8dd7d667-ct8ff\" (UID: \"c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d\") " pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" Jan 20 01:34:05.963416 kubelet[2780]: I0120 01:34:05.962411 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvw5\" (UniqueName: \"kubernetes.io/projected/c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d-kube-api-access-hzvw5\") pod \"calico-apiserver-7c8dd7d667-ct8ff\" (UID: \"c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d\") " pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" Jan 20 01:34:05.963416 kubelet[2780]: I0120 01:34:05.962424 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/93c423b9-f734-475b-aea9-f003af7097a2-goldmane-key-pair\") pod \"goldmane-666569f655-rs9sl\" (UID: \"93c423b9-f734-475b-aea9-f003af7097a2\") " pod="calico-system/goldmane-666569f655-rs9sl" Jan 20 01:34:05.963416 kubelet[2780]: I0120 01:34:05.962438 2780 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e535c75b-4142-4085-8d9d-2841894e5fe8-tigera-ca-bundle\") pod \"calico-kube-controllers-947d9dcc-bp5fh\" (UID: \"e535c75b-4142-4085-8d9d-2841894e5fe8\") " pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" Jan 20 01:34:05.963587 kubelet[2780]: I0120 01:34:05.962453 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vwks\" (UniqueName: \"kubernetes.io/projected/aab500f2-8a22-416d-a3ca-7e80812d5776-kube-api-access-2vwks\") pod \"coredns-668d6bf9bc-jw8rq\" (UID: \"aab500f2-8a22-416d-a3ca-7e80812d5776\") " pod="kube-system/coredns-668d6bf9bc-jw8rq" Jan 20 01:34:05.963587 kubelet[2780]: I0120 01:34:05.962466 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c423b9-f734-475b-aea9-f003af7097a2-config\") pod \"goldmane-666569f655-rs9sl\" (UID: \"93c423b9-f734-475b-aea9-f003af7097a2\") " pod="calico-system/goldmane-666569f655-rs9sl" Jan 20 01:34:05.963587 kubelet[2780]: I0120 01:34:05.962483 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aab500f2-8a22-416d-a3ca-7e80812d5776-config-volume\") pod \"coredns-668d6bf9bc-jw8rq\" (UID: \"aab500f2-8a22-416d-a3ca-7e80812d5776\") " pod="kube-system/coredns-668d6bf9bc-jw8rq" Jan 20 01:34:05.963587 kubelet[2780]: I0120 01:34:05.962496 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-backend-key-pair\") pod \"whisker-58f64dd867-nwz5n\" (UID: \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\") " pod="calico-system/whisker-58f64dd867-nwz5n" Jan 20 
01:34:06.003938 kubelet[2780]: I0120 01:34:06.003855 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:34:06.004575 kubelet[2780]: E0120 01:34:06.004445 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:06.054000 audit[3567]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:06.054000 audit[3567]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffebe07fda0 a2=0 a3=7ffebe07fd8c items=0 ppid=2936 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:06.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:06.067000 audit[3567]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:06.067000 audit[3567]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffebe07fda0 a2=0 a3=7ffebe07fd8c items=0 ppid=2936 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:06.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:06.140265 kubelet[2780]: E0120 01:34:06.140183 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
01:34:06.140683 kubelet[2780]: E0120 01:34:06.140653 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:06.142048 containerd[1611]: time="2026-01-20T01:34:06.141990697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:34:06.150772 containerd[1611]: time="2026-01-20T01:34:06.150375565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-947d9dcc-bp5fh,Uid:e535c75b-4142-4085-8d9d-2841894e5fe8,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:06.167996 containerd[1611]: time="2026-01-20T01:34:06.167915047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f64dd867-nwz5n,Uid:a2ca7657-636b-49d1-99cd-fdd7e6e260be,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:06.170991 containerd[1611]: time="2026-01-20T01:34:06.170925708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd7bff465-4rkgx,Uid:d9baf707-371f-47e4-9f67-1785bd6ba68b,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:34:06.184609 kubelet[2780]: E0120 01:34:06.184369 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:06.186254 containerd[1611]: time="2026-01-20T01:34:06.186214042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5zczl,Uid:7c57aba5-9af6-45bb-832d-1152db895836,Namespace:kube-system,Attempt:0,}" Jan 20 01:34:06.192668 containerd[1611]: time="2026-01-20T01:34:06.192502868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-ct8ff,Uid:c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:34:06.200375 containerd[1611]: time="2026-01-20T01:34:06.200194274Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-prz7k,Uid:c55441d4-7803-4009-82ca-ee9ec6a88be8,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:34:06.209800 containerd[1611]: time="2026-01-20T01:34:06.209645763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rs9sl,Uid:93c423b9-f734-475b-aea9-f003af7097a2,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:06.214281 kubelet[2780]: E0120 01:34:06.213370 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:06.219517 containerd[1611]: time="2026-01-20T01:34:06.219429783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw8rq,Uid:aab500f2-8a22-416d-a3ca-7e80812d5776,Namespace:kube-system,Attempt:0,}" Jan 20 01:34:06.381017 containerd[1611]: time="2026-01-20T01:34:06.380843458Z" level=error msg="Failed to destroy network for sandbox \"75fe74c5e1883fe5999bfb5afcb0657ccdd76bb77595af66a2770c920642f900\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.403958 containerd[1611]: time="2026-01-20T01:34:06.403600581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-947d9dcc-bp5fh,Uid:e535c75b-4142-4085-8d9d-2841894e5fe8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75fe74c5e1883fe5999bfb5afcb0657ccdd76bb77595af66a2770c920642f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.405139 kubelet[2780]: E0120 01:34:06.404949 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"75fe74c5e1883fe5999bfb5afcb0657ccdd76bb77595af66a2770c920642f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.405530 kubelet[2780]: E0120 01:34:06.405388 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75fe74c5e1883fe5999bfb5afcb0657ccdd76bb77595af66a2770c920642f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" Jan 20 01:34:06.405530 kubelet[2780]: E0120 01:34:06.405476 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75fe74c5e1883fe5999bfb5afcb0657ccdd76bb77595af66a2770c920642f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" Jan 20 01:34:06.406411 kubelet[2780]: E0120 01:34:06.406305 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-947d9dcc-bp5fh_calico-system(e535c75b-4142-4085-8d9d-2841894e5fe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-947d9dcc-bp5fh_calico-system(e535c75b-4142-4085-8d9d-2841894e5fe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75fe74c5e1883fe5999bfb5afcb0657ccdd76bb77595af66a2770c920642f900\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:34:06.447316 containerd[1611]: time="2026-01-20T01:34:06.447227443Z" level=error msg="Failed to destroy network for sandbox \"ea4b3b2ffe5b12873073ddf41fd6426672572d27ab055f574b31b78c8d2036d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.456539 containerd[1611]: time="2026-01-20T01:34:06.453644652Z" level=error msg="Failed to destroy network for sandbox \"ac4d961734680366c5482cc1b35cf8dcb331d628e68510a2d9e7b1ead1d62b6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.456892 containerd[1611]: time="2026-01-20T01:34:06.456809780Z" level=error msg="Failed to destroy network for sandbox \"19d804badc3ff02d3e7505a21ce13e7e2f77e5516ea80f7d08731a8a34a6dccd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.458778 containerd[1611]: time="2026-01-20T01:34:06.458726557Z" level=error msg="Failed to destroy network for sandbox \"79c64c55cf540f3237e692cd22e4fe4e6dd029f91221a565bfbdcbc245ef166e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.461013 containerd[1611]: time="2026-01-20T01:34:06.460825284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rs9sl,Uid:93c423b9-f734-475b-aea9-f003af7097a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"ea4b3b2ffe5b12873073ddf41fd6426672572d27ab055f574b31b78c8d2036d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.461356 kubelet[2780]: E0120 01:34:06.461296 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4b3b2ffe5b12873073ddf41fd6426672572d27ab055f574b31b78c8d2036d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.461431 kubelet[2780]: E0120 01:34:06.461377 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4b3b2ffe5b12873073ddf41fd6426672572d27ab055f574b31b78c8d2036d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rs9sl" Jan 20 01:34:06.461431 kubelet[2780]: E0120 01:34:06.461406 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4b3b2ffe5b12873073ddf41fd6426672572d27ab055f574b31b78c8d2036d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rs9sl" Jan 20 01:34:06.461526 kubelet[2780]: E0120 01:34:06.461455 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rs9sl_calico-system(93c423b9-f734-475b-aea9-f003af7097a2)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"goldmane-666569f655-rs9sl_calico-system(93c423b9-f734-475b-aea9-f003af7097a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea4b3b2ffe5b12873073ddf41fd6426672572d27ab055f574b31b78c8d2036d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:34:06.466815 containerd[1611]: time="2026-01-20T01:34:06.466396662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f64dd867-nwz5n,Uid:a2ca7657-636b-49d1-99cd-fdd7e6e260be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c64c55cf540f3237e692cd22e4fe4e6dd029f91221a565bfbdcbc245ef166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.468038 kubelet[2780]: E0120 01:34:06.467187 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c64c55cf540f3237e692cd22e4fe4e6dd029f91221a565bfbdcbc245ef166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.468756 kubelet[2780]: E0120 01:34:06.468545 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c64c55cf540f3237e692cd22e4fe4e6dd029f91221a565bfbdcbc245ef166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-58f64dd867-nwz5n" Jan 20 01:34:06.468756 kubelet[2780]: E0120 01:34:06.468678 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c64c55cf540f3237e692cd22e4fe4e6dd029f91221a565bfbdcbc245ef166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58f64dd867-nwz5n" Jan 20 01:34:06.469056 kubelet[2780]: E0120 01:34:06.468782 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58f64dd867-nwz5n_calico-system(a2ca7657-636b-49d1-99cd-fdd7e6e260be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58f64dd867-nwz5n_calico-system(a2ca7657-636b-49d1-99cd-fdd7e6e260be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79c64c55cf540f3237e692cd22e4fe4e6dd029f91221a565bfbdcbc245ef166e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58f64dd867-nwz5n" podUID="a2ca7657-636b-49d1-99cd-fdd7e6e260be" Jan 20 01:34:06.480485 containerd[1611]: time="2026-01-20T01:34:06.480382299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5zczl,Uid:7c57aba5-9af6-45bb-832d-1152db895836,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4d961734680366c5482cc1b35cf8dcb331d628e68510a2d9e7b1ead1d62b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.481303 kubelet[2780]: E0120 01:34:06.481173 
2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4d961734680366c5482cc1b35cf8dcb331d628e68510a2d9e7b1ead1d62b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.481303 kubelet[2780]: E0120 01:34:06.481255 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4d961734680366c5482cc1b35cf8dcb331d628e68510a2d9e7b1ead1d62b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5zczl" Jan 20 01:34:06.481303 kubelet[2780]: E0120 01:34:06.481279 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4d961734680366c5482cc1b35cf8dcb331d628e68510a2d9e7b1ead1d62b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5zczl" Jan 20 01:34:06.481445 kubelet[2780]: E0120 01:34:06.481334 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5zczl_kube-system(7c57aba5-9af6-45bb-832d-1152db895836)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5zczl_kube-system(7c57aba5-9af6-45bb-832d-1152db895836)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac4d961734680366c5482cc1b35cf8dcb331d628e68510a2d9e7b1ead1d62b6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5zczl" podUID="7c57aba5-9af6-45bb-832d-1152db895836" Jan 20 01:34:06.482552 containerd[1611]: time="2026-01-20T01:34:06.482345953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd7bff465-4rkgx,Uid:d9baf707-371f-47e4-9f67-1785bd6ba68b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d804badc3ff02d3e7505a21ce13e7e2f77e5516ea80f7d08731a8a34a6dccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.482765 kubelet[2780]: E0120 01:34:06.482583 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d804badc3ff02d3e7505a21ce13e7e2f77e5516ea80f7d08731a8a34a6dccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.482765 kubelet[2780]: E0120 01:34:06.482627 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d804badc3ff02d3e7505a21ce13e7e2f77e5516ea80f7d08731a8a34a6dccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" Jan 20 01:34:06.482765 kubelet[2780]: E0120 01:34:06.482650 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19d804badc3ff02d3e7505a21ce13e7e2f77e5516ea80f7d08731a8a34a6dccd\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" Jan 20 01:34:06.482936 kubelet[2780]: E0120 01:34:06.482722 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd7bff465-4rkgx_calico-apiserver(d9baf707-371f-47e4-9f67-1785bd6ba68b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd7bff465-4rkgx_calico-apiserver(d9baf707-371f-47e4-9f67-1785bd6ba68b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19d804badc3ff02d3e7505a21ce13e7e2f77e5516ea80f7d08731a8a34a6dccd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:34:06.507728 containerd[1611]: time="2026-01-20T01:34:06.507502623Z" level=error msg="Failed to destroy network for sandbox \"520fd09f07b5a1a1290f2ec8efce14bbe5ea8389d581a868fbb55a15b142d61b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.514416 containerd[1611]: time="2026-01-20T01:34:06.514298545Z" level=error msg="Failed to destroy network for sandbox \"dc3637a5733264a645c15b41e66fb900896ce22dd85992b77a6065187cb92064\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.522363 containerd[1611]: time="2026-01-20T01:34:06.521834757Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-ct8ff,Uid:c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"520fd09f07b5a1a1290f2ec8efce14bbe5ea8389d581a868fbb55a15b142d61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.522363 containerd[1611]: time="2026-01-20T01:34:06.522158532Z" level=error msg="Failed to destroy network for sandbox \"d07f8ebae8134267e86cebfc895473ed3347553dcf050f14d4edd6d4dccd83ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.522849 kubelet[2780]: E0120 01:34:06.522781 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520fd09f07b5a1a1290f2ec8efce14bbe5ea8389d581a868fbb55a15b142d61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.522921 kubelet[2780]: E0120 01:34:06.522859 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520fd09f07b5a1a1290f2ec8efce14bbe5ea8389d581a868fbb55a15b142d61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" Jan 20 01:34:06.522921 kubelet[2780]: E0120 01:34:06.522883 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"520fd09f07b5a1a1290f2ec8efce14bbe5ea8389d581a868fbb55a15b142d61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" Jan 20 01:34:06.523013 kubelet[2780]: E0120 01:34:06.522947 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8dd7d667-ct8ff_calico-apiserver(c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8dd7d667-ct8ff_calico-apiserver(c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"520fd09f07b5a1a1290f2ec8efce14bbe5ea8389d581a868fbb55a15b142d61b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:34:06.528820 containerd[1611]: time="2026-01-20T01:34:06.527889317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-prz7k,Uid:c55441d4-7803-4009-82ca-ee9ec6a88be8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d07f8ebae8134267e86cebfc895473ed3347553dcf050f14d4edd6d4dccd83ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.529059 kubelet[2780]: E0120 01:34:06.528449 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d07f8ebae8134267e86cebfc895473ed3347553dcf050f14d4edd6d4dccd83ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.529059 kubelet[2780]: E0120 01:34:06.528526 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d07f8ebae8134267e86cebfc895473ed3347553dcf050f14d4edd6d4dccd83ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" Jan 20 01:34:06.529059 kubelet[2780]: E0120 01:34:06.528553 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d07f8ebae8134267e86cebfc895473ed3347553dcf050f14d4edd6d4dccd83ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" Jan 20 01:34:06.529264 kubelet[2780]: E0120 01:34:06.528602 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8dd7d667-prz7k_calico-apiserver(c55441d4-7803-4009-82ca-ee9ec6a88be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8dd7d667-prz7k_calico-apiserver(c55441d4-7803-4009-82ca-ee9ec6a88be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d07f8ebae8134267e86cebfc895473ed3347553dcf050f14d4edd6d4dccd83ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:34:06.537579 containerd[1611]: time="2026-01-20T01:34:06.537451870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw8rq,Uid:aab500f2-8a22-416d-a3ca-7e80812d5776,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc3637a5733264a645c15b41e66fb900896ce22dd85992b77a6065187cb92064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.538165 kubelet[2780]: E0120 01:34:06.538010 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc3637a5733264a645c15b41e66fb900896ce22dd85992b77a6065187cb92064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.538260 kubelet[2780]: E0120 01:34:06.538191 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc3637a5733264a645c15b41e66fb900896ce22dd85992b77a6065187cb92064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jw8rq" Jan 20 01:34:06.538260 kubelet[2780]: E0120 01:34:06.538220 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc3637a5733264a645c15b41e66fb900896ce22dd85992b77a6065187cb92064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jw8rq" Jan 20 01:34:06.538500 kubelet[2780]: E0120 01:34:06.538377 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jw8rq_kube-system(aab500f2-8a22-416d-a3ca-7e80812d5776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jw8rq_kube-system(aab500f2-8a22-416d-a3ca-7e80812d5776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc3637a5733264a645c15b41e66fb900896ce22dd85992b77a6065187cb92064\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jw8rq" podUID="aab500f2-8a22-416d-a3ca-7e80812d5776" Jan 20 01:34:06.706259 systemd[1]: Created slice kubepods-besteffort-pod164d51f9_eed6_48ef_9188_a78d4106afb9.slice - libcontainer container kubepods-besteffort-pod164d51f9_eed6_48ef_9188_a78d4106afb9.slice. Jan 20 01:34:06.709608 containerd[1611]: time="2026-01-20T01:34:06.709514317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phdz7,Uid:164d51f9-eed6-48ef-9188-a78d4106afb9,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:06.784004 containerd[1611]: time="2026-01-20T01:34:06.783610079Z" level=error msg="Failed to destroy network for sandbox \"d91656f1429bcc611590c666b0edf5aed51f8f74f18c17be4d6dfa976ee5aedd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.788569 systemd[1]: run-netns-cni\x2daef3f97a\x2d320f\x2d974c\x2df410\x2d252eedfaffa9.mount: Deactivated successfully. 
Jan 20 01:34:06.791800 containerd[1611]: time="2026-01-20T01:34:06.791729366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phdz7,Uid:164d51f9-eed6-48ef-9188-a78d4106afb9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91656f1429bcc611590c666b0edf5aed51f8f74f18c17be4d6dfa976ee5aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.792400 kubelet[2780]: E0120 01:34:06.792342 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91656f1429bcc611590c666b0edf5aed51f8f74f18c17be4d6dfa976ee5aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:34:06.792933 kubelet[2780]: E0120 01:34:06.792427 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91656f1429bcc611590c666b0edf5aed51f8f74f18c17be4d6dfa976ee5aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-phdz7" Jan 20 01:34:06.792933 kubelet[2780]: E0120 01:34:06.792517 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91656f1429bcc611590c666b0edf5aed51f8f74f18c17be4d6dfa976ee5aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-phdz7" 
Jan 20 01:34:06.792933 kubelet[2780]: E0120 01:34:06.792791 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d91656f1429bcc611590c666b0edf5aed51f8f74f18c17be4d6dfa976ee5aedd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:14.915639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448281905.mount: Deactivated successfully. Jan 20 01:34:15.213002 containerd[1611]: time="2026-01-20T01:34:15.212869634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:15.215766 containerd[1611]: time="2026-01-20T01:34:15.215118912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 01:34:15.217532 containerd[1611]: time="2026-01-20T01:34:15.217484198Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:15.221489 containerd[1611]: time="2026-01-20T01:34:15.221397765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:15.222366 containerd[1611]: time="2026-01-20T01:34:15.222164400Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.08012943s" Jan 20 01:34:15.222366 containerd[1611]: time="2026-01-20T01:34:15.222210035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 01:34:15.236349 containerd[1611]: time="2026-01-20T01:34:15.235349290Z" level=info msg="CreateContainer within sandbox \"ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:34:15.301619 containerd[1611]: time="2026-01-20T01:34:15.301500896Z" level=info msg="Container 63a618234ba636984f0d0c858aa3cde275fba81b9f304eb877dbd9268c28c8b8: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:34:15.324669 containerd[1611]: time="2026-01-20T01:34:15.324563006Z" level=info msg="CreateContainer within sandbox \"ddf5831de6e981e42f61570964c68ed5bfa9a858d6eb14cdae016f043b537bd8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"63a618234ba636984f0d0c858aa3cde275fba81b9f304eb877dbd9268c28c8b8\"" Jan 20 01:34:15.326595 containerd[1611]: time="2026-01-20T01:34:15.325621510Z" level=info msg="StartContainer for \"63a618234ba636984f0d0c858aa3cde275fba81b9f304eb877dbd9268c28c8b8\"" Jan 20 01:34:15.327905 containerd[1611]: time="2026-01-20T01:34:15.327787153Z" level=info msg="connecting to shim 63a618234ba636984f0d0c858aa3cde275fba81b9f304eb877dbd9268c28c8b8" address="unix:///run/containerd/s/196fe3f01dca3babb0681b1df9d4fc054e4f608ca54fc107b2007b5298f467d9" protocol=ttrpc version=3 Jan 20 01:34:15.370731 systemd[1]: Started 
cri-containerd-63a618234ba636984f0d0c858aa3cde275fba81b9f304eb877dbd9268c28c8b8.scope - libcontainer container 63a618234ba636984f0d0c858aa3cde275fba81b9f304eb877dbd9268c28c8b8. Jan 20 01:34:15.479000 audit: BPF prog-id=170 op=LOAD Jan 20 01:34:15.484293 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:34:15.484507 kernel: audit: type=1334 audit(1768872855.479:553): prog-id=170 op=LOAD Jan 20 01:34:15.479000 audit[3886]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.498939 kernel: audit: type=1300 audit(1768872855.479:553): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.499076 kernel: audit: type=1327 audit(1768872855.479:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.515439 kernel: audit: type=1334 audit(1768872855.480:554): prog-id=171 op=LOAD Jan 20 01:34:15.480000 audit: BPF prog-id=171 op=LOAD Jan 20 01:34:15.480000 audit[3886]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 
items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.527511 kernel: audit: type=1300 audit(1768872855.480:554): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.539049 kernel: audit: type=1327 audit(1768872855.480:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.539182 kernel: audit: type=1334 audit(1768872855.480:555): prog-id=171 op=UNLOAD Jan 20 01:34:15.480000 audit: BPF prog-id=171 op=UNLOAD Jan 20 01:34:15.543208 kernel: audit: type=1300 audit(1768872855.480:555): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.480000 audit[3886]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.554453 containerd[1611]: time="2026-01-20T01:34:15.554361049Z" level=info msg="StartContainer for \"63a618234ba636984f0d0c858aa3cde275fba81b9f304eb877dbd9268c28c8b8\" returns successfully" Jan 20 01:34:15.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.566678 kernel: audit: type=1327 audit(1768872855.480:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.566872 kernel: audit: type=1334 audit(1768872855.480:556): prog-id=170 op=UNLOAD Jan 20 01:34:15.480000 audit: BPF prog-id=170 op=UNLOAD Jan 20 01:34:15.480000 audit[3886]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.480000 audit: BPF prog-id=172 op=LOAD Jan 20 01:34:15.480000 audit[3886]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3312 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:15.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613631383233346261363336393834663064306338353861613363 Jan 20 01:34:15.694818 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 01:34:15.694977 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 20 01:34:15.984321 kubelet[2780]: I0120 01:34:15.984186 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4wfz\" (UniqueName: \"kubernetes.io/projected/a2ca7657-636b-49d1-99cd-fdd7e6e260be-kube-api-access-h4wfz\") pod \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\" (UID: \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\") " Jan 20 01:34:15.984321 kubelet[2780]: I0120 01:34:15.984265 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-backend-key-pair\") pod \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\" (UID: \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\") " Jan 20 01:34:15.984321 kubelet[2780]: I0120 01:34:15.984294 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-ca-bundle\") pod \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\" (UID: \"a2ca7657-636b-49d1-99cd-fdd7e6e260be\") " Jan 20 01:34:15.986831 kubelet[2780]: I0120 01:34:15.986684 2780 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a2ca7657-636b-49d1-99cd-fdd7e6e260be" (UID: 
"a2ca7657-636b-49d1-99cd-fdd7e6e260be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:34:15.994143 kubelet[2780]: I0120 01:34:15.992137 2780 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a2ca7657-636b-49d1-99cd-fdd7e6e260be" (UID: "a2ca7657-636b-49d1-99cd-fdd7e6e260be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:34:15.993624 systemd[1]: var-lib-kubelet-pods-a2ca7657\x2d636b\x2d49d1\x2d99cd\x2dfdd7e6e260be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:34:15.995263 kubelet[2780]: I0120 01:34:15.995210 2780 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ca7657-636b-49d1-99cd-fdd7e6e260be-kube-api-access-h4wfz" (OuterVolumeSpecName: "kube-api-access-h4wfz") pod "a2ca7657-636b-49d1-99cd-fdd7e6e260be" (UID: "a2ca7657-636b-49d1-99cd-fdd7e6e260be"). InnerVolumeSpecName "kube-api-access-h4wfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:34:15.998049 systemd[1]: var-lib-kubelet-pods-a2ca7657\x2d636b\x2d49d1\x2d99cd\x2dfdd7e6e260be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh4wfz.mount: Deactivated successfully. 
Jan 20 01:34:16.085266 kubelet[2780]: I0120 01:34:16.085212 2780 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4wfz\" (UniqueName: \"kubernetes.io/projected/a2ca7657-636b-49d1-99cd-fdd7e6e260be-kube-api-access-h4wfz\") on node \"localhost\" DevicePath \"\"" Jan 20 01:34:16.085266 kubelet[2780]: I0120 01:34:16.085251 2780 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 01:34:16.085266 kubelet[2780]: I0120 01:34:16.085262 2780 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2ca7657-636b-49d1-99cd-fdd7e6e260be-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 01:34:16.180918 kubelet[2780]: E0120 01:34:16.180859 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:16.189900 systemd[1]: Removed slice kubepods-besteffort-poda2ca7657_636b_49d1_99cd_fdd7e6e260be.slice - libcontainer container kubepods-besteffort-poda2ca7657_636b_49d1_99cd_fdd7e6e260be.slice. 
Jan 20 01:34:16.207011 kubelet[2780]: I0120 01:34:16.206941 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bpvml" podStartSLOduration=1.900131328 podStartE2EDuration="18.206910065s" podCreationTimestamp="2026-01-20 01:33:58 +0000 UTC" firstStartedPulling="2026-01-20 01:33:58.917006724 +0000 UTC m=+21.361517523" lastFinishedPulling="2026-01-20 01:34:15.223785462 +0000 UTC m=+37.668296260" observedRunningTime="2026-01-20 01:34:16.205687563 +0000 UTC m=+38.650198361" watchObservedRunningTime="2026-01-20 01:34:16.206910065 +0000 UTC m=+38.651420863" Jan 20 01:34:16.288900 systemd[1]: Created slice kubepods-besteffort-pod49316a51_69bf_4cd8_a713_083d988333bb.slice - libcontainer container kubepods-besteffort-pod49316a51_69bf_4cd8_a713_083d988333bb.slice. Jan 20 01:34:16.388867 kubelet[2780]: I0120 01:34:16.388808 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/49316a51-69bf-4cd8-a713-083d988333bb-whisker-backend-key-pair\") pod \"whisker-b9db9c79-llb9v\" (UID: \"49316a51-69bf-4cd8-a713-083d988333bb\") " pod="calico-system/whisker-b9db9c79-llb9v" Jan 20 01:34:16.388867 kubelet[2780]: I0120 01:34:16.388873 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchlw\" (UniqueName: \"kubernetes.io/projected/49316a51-69bf-4cd8-a713-083d988333bb-kube-api-access-mchlw\") pod \"whisker-b9db9c79-llb9v\" (UID: \"49316a51-69bf-4cd8-a713-083d988333bb\") " pod="calico-system/whisker-b9db9c79-llb9v" Jan 20 01:34:16.388867 kubelet[2780]: I0120 01:34:16.388909 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49316a51-69bf-4cd8-a713-083d988333bb-whisker-ca-bundle\") pod \"whisker-b9db9c79-llb9v\" (UID: \"49316a51-69bf-4cd8-a713-083d988333bb\") 
" pod="calico-system/whisker-b9db9c79-llb9v" Jan 20 01:34:16.595828 containerd[1611]: time="2026-01-20T01:34:16.595029939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b9db9c79-llb9v,Uid:49316a51-69bf-4cd8-a713-083d988333bb,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:16.926535 systemd-networkd[1506]: calie9fc4d41fc4: Link UP Jan 20 01:34:16.927874 systemd-networkd[1506]: calie9fc4d41fc4: Gained carrier Jan 20 01:34:16.947544 containerd[1611]: 2026-01-20 01:34:16.643 [INFO][3956] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:34:16.947544 containerd[1611]: 2026-01-20 01:34:16.685 [INFO][3956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b9db9c79--llb9v-eth0 whisker-b9db9c79- calico-system 49316a51-69bf-4cd8-a713-083d988333bb 917 0 2026-01-20 01:34:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b9db9c79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b9db9c79-llb9v eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie9fc4d41fc4 [] [] }} ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-" Jan 20 01:34:16.947544 containerd[1611]: 2026-01-20 01:34:16.685 [INFO][3956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-eth0" Jan 20 01:34:16.947544 containerd[1611]: 2026-01-20 01:34:16.844 [INFO][3970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" HandleID="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Workload="localhost-k8s-whisker--b9db9c79--llb9v-eth0" Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.846 [INFO][3970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" HandleID="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Workload="localhost-k8s-whisker--b9db9c79--llb9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036c150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b9db9c79-llb9v", "timestamp":"2026-01-20 01:34:16.844166967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.846 [INFO][3970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.846 [INFO][3970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.847 [INFO][3970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.861 [INFO][3970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" host="localhost" Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.871 [INFO][3970] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.880 [INFO][3970] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.885 [INFO][3970] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.889 [INFO][3970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:16.948314 containerd[1611]: 2026-01-20 01:34:16.889 [INFO][3970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" host="localhost" Jan 20 01:34:16.948759 containerd[1611]: 2026-01-20 01:34:16.892 [INFO][3970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7 Jan 20 01:34:16.948759 containerd[1611]: 2026-01-20 01:34:16.898 [INFO][3970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" host="localhost" Jan 20 01:34:16.948759 containerd[1611]: 2026-01-20 01:34:16.909 [INFO][3970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" host="localhost" Jan 20 01:34:16.948759 containerd[1611]: 2026-01-20 01:34:16.909 [INFO][3970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" host="localhost" Jan 20 01:34:16.948759 containerd[1611]: 2026-01-20 01:34:16.909 [INFO][3970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:16.948759 containerd[1611]: 2026-01-20 01:34:16.909 [INFO][3970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" HandleID="k8s-pod-network.7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Workload="localhost-k8s-whisker--b9db9c79--llb9v-eth0" Jan 20 01:34:16.949061 containerd[1611]: 2026-01-20 01:34:16.912 [INFO][3956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b9db9c79--llb9v-eth0", GenerateName:"whisker-b9db9c79-", Namespace:"calico-system", SelfLink:"", UID:"49316a51-69bf-4cd8-a713-083d988333bb", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b9db9c79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b9db9c79-llb9v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9fc4d41fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:16.949061 containerd[1611]: 2026-01-20 01:34:16.912 [INFO][3956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-eth0" Jan 20 01:34:16.949255 containerd[1611]: 2026-01-20 01:34:16.913 [INFO][3956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9fc4d41fc4 ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-eth0" Jan 20 01:34:16.949255 containerd[1611]: 2026-01-20 01:34:16.926 [INFO][3956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-eth0" Jan 20 01:34:16.949346 containerd[1611]: 2026-01-20 01:34:16.927 [INFO][3956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" 
WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b9db9c79--llb9v-eth0", GenerateName:"whisker-b9db9c79-", Namespace:"calico-system", SelfLink:"", UID:"49316a51-69bf-4cd8-a713-083d988333bb", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 34, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b9db9c79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7", Pod:"whisker-b9db9c79-llb9v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9fc4d41fc4", MAC:"92:b7:93:b8:e0:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:16.949449 containerd[1611]: 2026-01-20 01:34:16.941 [INFO][3956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" Namespace="calico-system" Pod="whisker-b9db9c79-llb9v" WorkloadEndpoint="localhost-k8s-whisker--b9db9c79--llb9v-eth0" Jan 20 01:34:17.124872 containerd[1611]: time="2026-01-20T01:34:17.124481243Z" level=info msg="connecting to shim 
7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7" address="unix:///run/containerd/s/2629b46dc533d92673a8cc2dc4c25eb49eade272fbd90adb1fe2b5c41ab1db62" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:17.191491 kubelet[2780]: E0120 01:34:17.191406 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:17.196201 systemd[1]: Started cri-containerd-7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7.scope - libcontainer container 7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7. Jan 20 01:34:17.231000 audit: BPF prog-id=173 op=LOAD Jan 20 01:34:17.232000 audit: BPF prog-id=174 op=LOAD Jan 20 01:34:17.232000 audit[4027]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3993 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383138393165373239363861366437666665666235656134313036 Jan 20 01:34:17.232000 audit: BPF prog-id=174 op=UNLOAD Jan 20 01:34:17.232000 audit[4027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3993 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.232000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383138393165373239363861366437666665666235656134313036 Jan 20 01:34:17.233000 audit: BPF prog-id=175 op=LOAD Jan 20 01:34:17.233000 audit[4027]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3993 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383138393165373239363861366437666665666235656134313036 Jan 20 01:34:17.233000 audit: BPF prog-id=176 op=LOAD Jan 20 01:34:17.233000 audit[4027]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3993 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383138393165373239363861366437666665666235656134313036 Jan 20 01:34:17.234000 audit: BPF prog-id=176 op=UNLOAD Jan 20 01:34:17.234000 audit[4027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3993 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:34:17.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383138393165373239363861366437666665666235656134313036 Jan 20 01:34:17.234000 audit: BPF prog-id=175 op=UNLOAD Jan 20 01:34:17.234000 audit[4027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3993 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383138393165373239363861366437666665666235656134313036 Jan 20 01:34:17.234000 audit: BPF prog-id=177 op=LOAD Jan 20 01:34:17.234000 audit[4027]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3993 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383138393165373239363861366437666665666235656134313036 Jan 20 01:34:17.238430 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:17.403267 containerd[1611]: time="2026-01-20T01:34:17.402332442Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-b9db9c79-llb9v,Uid:49316a51-69bf-4cd8-a713-083d988333bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"7781891e72968a6d7ffefb5ea410666db9c6e849c9205377e25a144f4670bbc7\"" Jan 20 01:34:17.409901 containerd[1611]: time="2026-01-20T01:34:17.409235619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:34:17.472000 audit: BPF prog-id=178 op=LOAD Jan 20 01:34:17.472000 audit[4162]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd08f219a0 a2=98 a3=1fffffffffffffff items=0 ppid=4044 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.472000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:34:17.472000 audit: BPF prog-id=178 op=UNLOAD Jan 20 01:34:17.472000 audit[4162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd08f21970 a3=0 items=0 ppid=4044 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.472000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:34:17.472000 audit: BPF prog-id=179 op=LOAD Jan 20 01:34:17.472000 audit[4162]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd08f21880 a2=94 a3=3 items=0 ppid=4044 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.472000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:34:17.472000 audit: BPF prog-id=179 op=UNLOAD Jan 20 01:34:17.472000 audit[4162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd08f21880 a2=94 a3=3 items=0 ppid=4044 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.472000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:34:17.472000 audit: BPF prog-id=180 op=LOAD Jan 20 01:34:17.472000 audit[4162]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd08f218c0 a2=94 a3=7ffd08f21aa0 items=0 ppid=4044 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.472000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:34:17.472000 audit: BPF prog-id=180 op=UNLOAD Jan 20 01:34:17.472000 audit[4162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd08f218c0 a2=94 
a3=7ffd08f21aa0 items=0 ppid=4044 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.472000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:34:17.482000 audit: BPF prog-id=181 op=LOAD Jan 20 01:34:17.482000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9feaeab0 a2=98 a3=3 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.482000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.483000 audit: BPF prog-id=181 op=UNLOAD Jan 20 01:34:17.483000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe9feaea80 a3=0 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.483000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.483000 audit: BPF prog-id=182 op=LOAD Jan 20 01:34:17.483000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9feae8a0 a2=94 a3=54428f items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.483000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.484000 
audit: BPF prog-id=182 op=UNLOAD Jan 20 01:34:17.484000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe9feae8a0 a2=94 a3=54428f items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.484000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.484000 audit: BPF prog-id=183 op=LOAD Jan 20 01:34:17.484000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9feae8d0 a2=94 a3=2 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.484000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.484000 audit: BPF prog-id=183 op=UNLOAD Jan 20 01:34:17.484000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe9feae8d0 a2=0 a3=2 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.484000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.487803 containerd[1611]: time="2026-01-20T01:34:17.487641742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:17.490520 containerd[1611]: time="2026-01-20T01:34:17.490390784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:17.490791 containerd[1611]: time="2026-01-20T01:34:17.490622697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:34:17.491544 kubelet[2780]: E0120 01:34:17.491465 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:34:17.491663 kubelet[2780]: E0120 01:34:17.491575 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:34:17.497324 kubelet[2780]: E0120 01:34:17.497199 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50d0da9d3db140cc8836270eb3a85a60,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,Rea
dOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:17.500501 containerd[1611]: time="2026-01-20T01:34:17.500462837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:34:17.569327 containerd[1611]: time="2026-01-20T01:34:17.569235438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:17.587832 containerd[1611]: time="2026-01-20T01:34:17.586948826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:34:17.587832 containerd[1611]: time="2026-01-20T01:34:17.586994501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:17.588025 kubelet[2780]: E0120 01:34:17.587241 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:34:17.588025 
kubelet[2780]: E0120 01:34:17.587301 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:34:17.588186 kubelet[2780]: E0120 01:34:17.587422 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&
SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:17.589862 kubelet[2780]: E0120 01:34:17.588767 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:34:17.703373 containerd[1611]: time="2026-01-20T01:34:17.702566386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rs9sl,Uid:93c423b9-f734-475b-aea9-f003af7097a2,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:17.703891 containerd[1611]: time="2026-01-20T01:34:17.703471753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-ct8ff,Uid:c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:34:17.706392 kubelet[2780]: I0120 01:34:17.706234 2780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a2ca7657-636b-49d1-99cd-fdd7e6e260be" path="/var/lib/kubelet/pods/a2ca7657-636b-49d1-99cd-fdd7e6e260be/volumes" Jan 20 01:34:17.750000 audit: BPF prog-id=184 op=LOAD Jan 20 01:34:17.750000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9feae790 a2=94 a3=1 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.750000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.751000 audit: BPF prog-id=184 op=UNLOAD Jan 20 01:34:17.751000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe9feae790 a2=94 a3=1 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.751000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.764000 audit: BPF prog-id=185 op=LOAD Jan 20 01:34:17.764000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe9feae780 a2=94 a3=4 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.764000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.768000 audit: BPF prog-id=185 op=UNLOAD Jan 20 01:34:17.768000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe9feae780 a2=0 a3=4 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.768000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.769000 audit: BPF prog-id=186 op=LOAD Jan 20 01:34:17.769000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9feae5e0 a2=94 a3=5 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.769000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.769000 audit: BPF prog-id=186 op=UNLOAD Jan 20 01:34:17.769000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe9feae5e0 a2=0 a3=5 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.769000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.769000 audit: BPF prog-id=187 op=LOAD Jan 20 01:34:17.769000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe9feae800 a2=94 a3=6 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.769000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.769000 audit: BPF prog-id=187 op=UNLOAD Jan 20 01:34:17.769000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe9feae800 a2=0 a3=6 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.769000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 
01:34:17.770000 audit: BPF prog-id=188 op=LOAD Jan 20 01:34:17.770000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe9feadfb0 a2=94 a3=88 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.770000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.771000 audit: BPF prog-id=189 op=LOAD Jan 20 01:34:17.771000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe9feade30 a2=94 a3=2 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.771000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.771000 audit: BPF prog-id=189 op=UNLOAD Jan 20 01:34:17.771000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe9feade60 a2=0 a3=7ffe9feadf60 items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.771000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.772000 audit: BPF prog-id=188 op=UNLOAD Jan 20 01:34:17.772000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=ace2d10 a2=0 a3=d918d83fe25cabfb items=0 ppid=4044 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.772000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:34:17.793000 audit: BPF prog-id=190 op=LOAD Jan 20 
01:34:17.793000 audit[4218]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe20195a00 a2=98 a3=1999999999999999 items=0 ppid=4044 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.793000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:34:17.794000 audit: BPF prog-id=190 op=UNLOAD Jan 20 01:34:17.794000 audit[4218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe201959d0 a3=0 items=0 ppid=4044 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.794000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:34:17.795000 audit: BPF prog-id=191 op=LOAD Jan 20 01:34:17.795000 audit[4218]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe201958e0 a2=94 a3=ffff items=0 ppid=4044 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.795000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:34:17.795000 audit: BPF prog-id=191 op=UNLOAD Jan 20 01:34:17.795000 audit[4218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe201958e0 a2=94 a3=ffff items=0 ppid=4044 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.795000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:34:17.795000 audit: BPF prog-id=192 op=LOAD Jan 20 01:34:17.795000 audit[4218]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe20195920 a2=94 a3=7ffe20195b00 items=0 ppid=4044 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.795000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:34:17.795000 audit: BPF prog-id=192 op=UNLOAD Jan 20 01:34:17.795000 audit[4218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe20195920 a2=94 a3=7ffe20195b00 items=0 ppid=4044 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:17.795000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:34:17.953330 systemd-networkd[1506]: vxlan.calico: Link UP Jan 20 01:34:17.953347 systemd-networkd[1506]: vxlan.calico: Gained carrier Jan 20 01:34:18.017060 systemd-networkd[1506]: cali8ac9b45ef89: Link UP Jan 20 01:34:18.025509 systemd-networkd[1506]: cali8ac9b45ef89: Gained carrier Jan 20 01:34:18.050903 systemd-networkd[1506]: calie9fc4d41fc4: Gained IPv6LL Jan 20 01:34:18.052000 audit: BPF prog-id=193 op=LOAD Jan 20 01:34:18.052000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe49e3bc00 a2=98 a3=0 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.052000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.052000 audit: BPF prog-id=193 op=UNLOAD Jan 20 01:34:18.052000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe49e3bbd0 a3=0 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.052000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.052000 audit: BPF prog-id=194 op=LOAD Jan 20 01:34:18.052000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe49e3ba10 a2=94 a3=54428f items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.052000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.052000 audit: BPF prog-id=194 op=UNLOAD Jan 20 01:34:18.052000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe49e3ba10 a2=94 a3=54428f items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.052000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.053000 audit: BPF prog-id=195 op=LOAD Jan 20 01:34:18.053000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe49e3ba40 a2=94 a3=2 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.053000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.053000 audit: BPF prog-id=195 op=UNLOAD Jan 20 01:34:18.053000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe49e3ba40 a2=0 a3=2 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.053000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.053000 audit: BPF prog-id=196 op=LOAD Jan 20 01:34:18.053000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe49e3b7f0 a2=94 a3=4 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.053000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.053000 audit: BPF prog-id=196 op=UNLOAD Jan 20 01:34:18.053000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe49e3b7f0 a2=94 a3=4 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.053000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.053000 audit: BPF prog-id=197 op=LOAD Jan 20 01:34:18.053000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe49e3b8f0 a2=94 a3=7ffe49e3ba70 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.053000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.053000 audit: BPF prog-id=197 op=UNLOAD Jan 20 01:34:18.053000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe49e3b8f0 a2=0 a3=7ffe49e3ba70 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.053000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.054000 audit: BPF prog-id=198 op=LOAD Jan 20 01:34:18.054000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe49e3b020 a2=94 a3=2 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.054000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.054000 audit: BPF prog-id=198 op=UNLOAD Jan 20 01:34:18.054000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe49e3b020 a2=0 a3=2 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.054000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.055000 audit: BPF prog-id=199 op=LOAD Jan 20 01:34:18.055000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe49e3b120 a2=94 a3=30 items=0 ppid=4044 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.055000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:34:18.073588 containerd[1611]: 2026-01-20 01:34:17.776 [INFO][4188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--rs9sl-eth0 goldmane-666569f655- calico-system 93c423b9-f734-475b-aea9-f003af7097a2 827 0 2026-01-20 01:33:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-rs9sl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8ac9b45ef89 [] [] }} ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-" Jan 20 01:34:18.073588 containerd[1611]: 2026-01-20 01:34:17.776 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-eth0" Jan 20 01:34:18.073588 containerd[1611]: 2026-01-20 01:34:17.865 [INFO][4220] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" HandleID="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Workload="localhost-k8s-goldmane--666569f655--rs9sl-eth0" Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.866 [INFO][4220] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" HandleID="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Workload="localhost-k8s-goldmane--666569f655--rs9sl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e5ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-rs9sl", "timestamp":"2026-01-20 01:34:17.865853688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:18.073889 containerd[1611]: 
2026-01-20 01:34:17.866 [INFO][4220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.866 [INFO][4220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.866 [INFO][4220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.885 [INFO][4220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" host="localhost" Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.901 [INFO][4220] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.915 [INFO][4220] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.921 [INFO][4220] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.926 [INFO][4220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:18.073889 containerd[1611]: 2026-01-20 01:34:17.927 [INFO][4220] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" host="localhost" Jan 20 01:34:18.075219 containerd[1611]: 2026-01-20 01:34:17.936 [INFO][4220] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8 Jan 20 01:34:18.075219 containerd[1611]: 2026-01-20 01:34:17.951 [INFO][4220] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" 
host="localhost" Jan 20 01:34:18.075219 containerd[1611]: 2026-01-20 01:34:17.989 [INFO][4220] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" host="localhost" Jan 20 01:34:18.075219 containerd[1611]: 2026-01-20 01:34:17.989 [INFO][4220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" host="localhost" Jan 20 01:34:18.075219 containerd[1611]: 2026-01-20 01:34:17.989 [INFO][4220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:18.075219 containerd[1611]: 2026-01-20 01:34:17.989 [INFO][4220] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" HandleID="k8s-pod-network.f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Workload="localhost-k8s-goldmane--666569f655--rs9sl-eth0" Jan 20 01:34:18.075410 containerd[1611]: 2026-01-20 01:34:17.997 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rs9sl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"93c423b9-f734-475b-aea9-f003af7097a2", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-rs9sl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ac9b45ef89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:18.075410 containerd[1611]: 2026-01-20 01:34:17.997 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-eth0" Jan 20 01:34:18.075572 containerd[1611]: 2026-01-20 01:34:17.997 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ac9b45ef89 ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-eth0" Jan 20 01:34:18.075572 containerd[1611]: 2026-01-20 01:34:18.020 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-eth0" Jan 20 01:34:18.075642 containerd[1611]: 2026-01-20 01:34:18.022 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--rs9sl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"93c423b9-f734-475b-aea9-f003af7097a2", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8", Pod:"goldmane-666569f655-rs9sl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ac9b45ef89", MAC:"0e:84:82:b4:1b:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:18.075782 containerd[1611]: 2026-01-20 01:34:18.069 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" Namespace="calico-system" Pod="goldmane-666569f655-rs9sl" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--rs9sl-eth0" Jan 20 01:34:18.082000 audit: BPF prog-id=200 op=LOAD Jan 20 01:34:18.082000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb485fe40 a2=98 a3=0 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.082000 audit: BPF prog-id=200 op=UNLOAD Jan 20 01:34:18.082000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeb485fe10 a3=0 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.084000 audit: BPF prog-id=201 op=LOAD Jan 20 01:34:18.084000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb485fc30 a2=94 a3=54428f items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.084000 audit: BPF prog-id=201 op=UNLOAD Jan 20 01:34:18.084000 
audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb485fc30 a2=94 a3=54428f items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.084000 audit: BPF prog-id=202 op=LOAD Jan 20 01:34:18.084000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb485fc60 a2=94 a3=2 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.084000 audit: BPF prog-id=202 op=UNLOAD Jan 20 01:34:18.084000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb485fc60 a2=0 a3=2 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.175286 containerd[1611]: time="2026-01-20T01:34:18.175203233Z" level=info msg="connecting to shim f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8" 
address="unix:///run/containerd/s/cdfa9589b13c5199d631184bd2b2220f23b6cca8ce65971a4b9f9de230a87c47" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:18.229188 kubelet[2780]: E0120 01:34:18.217519 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:18.231382 systemd-networkd[1506]: calif5ce8e97758: Link UP Jan 20 01:34:18.231956 systemd-networkd[1506]: calif5ce8e97758: Gained carrier Jan 20 01:34:18.238570 kubelet[2780]: E0120 01:34:18.238493 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:34:18.284167 containerd[1611]: 2026-01-20 01:34:17.801 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0 calico-apiserver-7c8dd7d667- calico-apiserver c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d 833 0 2026-01-20 01:33:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c8dd7d667 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c8dd7d667-ct8ff eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif5ce8e97758 [] [] }} ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-" Jan 20 01:34:18.284167 containerd[1611]: 2026-01-20 01:34:17.804 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" Jan 20 01:34:18.284167 containerd[1611]: 2026-01-20 01:34:17.869 [INFO][4235] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" HandleID="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Workload="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:17.869 [INFO][4235] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" HandleID="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Workload="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003296a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c8dd7d667-ct8ff", "timestamp":"2026-01-20 01:34:17.869259025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:17.869 [INFO][4235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:17.989 [INFO][4235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:17.989 [INFO][4235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:18.007 [INFO][4235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" host="localhost" Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:18.065 [INFO][4235] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:18.081 [INFO][4235] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:18.091 [INFO][4235] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:18.101 [INFO][4235] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:18.284586 containerd[1611]: 2026-01-20 01:34:18.101 [INFO][4235] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" host="localhost" Jan 20 01:34:18.286509 containerd[1611]: 2026-01-20 01:34:18.124 [INFO][4235] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00 Jan 20 01:34:18.286509 containerd[1611]: 2026-01-20 01:34:18.148 [INFO][4235] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" host="localhost" Jan 20 01:34:18.286509 containerd[1611]: 2026-01-20 01:34:18.164 [INFO][4235] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" host="localhost" Jan 20 01:34:18.286509 containerd[1611]: 2026-01-20 01:34:18.164 [INFO][4235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" host="localhost" Jan 20 01:34:18.286509 containerd[1611]: 2026-01-20 01:34:18.164 [INFO][4235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:18.286509 containerd[1611]: 2026-01-20 01:34:18.164 [INFO][4235] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" HandleID="k8s-pod-network.1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Workload="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" Jan 20 01:34:18.286677 containerd[1611]: 2026-01-20 01:34:18.193 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0", GenerateName:"calico-apiserver-7c8dd7d667-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 53, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8dd7d667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c8dd7d667-ct8ff", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5ce8e97758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:18.286811 containerd[1611]: 2026-01-20 01:34:18.194 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" Jan 20 01:34:18.286811 containerd[1611]: 2026-01-20 01:34:18.194 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5ce8e97758 ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" Jan 20 01:34:18.286811 containerd[1611]: 2026-01-20 01:34:18.212 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" Jan 20 01:34:18.286918 containerd[1611]: 2026-01-20 01:34:18.212 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0", GenerateName:"calico-apiserver-7c8dd7d667-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8dd7d667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00", Pod:"calico-apiserver-7c8dd7d667-ct8ff", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5ce8e97758", MAC:"da:16:66:a5:7c:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:18.287027 containerd[1611]: 2026-01-20 01:34:18.255 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-ct8ff" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--ct8ff-eth0" Jan 20 01:34:18.312000 audit[4319]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:18.312000 audit[4319]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd3b055480 a2=0 a3=7ffd3b05546c items=0 ppid=2936 pid=4319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:18.318000 audit[4319]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:18.318000 audit[4319]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd3b055480 a2=0 a3=0 items=0 ppid=2936 pid=4319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:18.332633 systemd[1]: Started 
cri-containerd-f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8.scope - libcontainer container f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8. Jan 20 01:34:18.352526 containerd[1611]: time="2026-01-20T01:34:18.352197556Z" level=info msg="connecting to shim 1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00" address="unix:///run/containerd/s/1eeebb8719bb63fc3d35440acf7f73c2aa7acbc47cea93b28a7ff7cc5de9a630" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:18.391000 audit: BPF prog-id=203 op=LOAD Jan 20 01:34:18.392000 audit: BPF prog-id=204 op=LOAD Jan 20 01:34:18.392000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4285 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635306130623562326531383661653964383639626436653065356663 Jan 20 01:34:18.392000 audit: BPF prog-id=204 op=UNLOAD Jan 20 01:34:18.392000 audit[4300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4285 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635306130623562326531383661653964383639626436653065356663 Jan 20 01:34:18.393000 audit: BPF prog-id=205 op=LOAD Jan 20 01:34:18.393000 
audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4285 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635306130623562326531383661653964383639626436653065356663 Jan 20 01:34:18.394000 audit: BPF prog-id=206 op=LOAD Jan 20 01:34:18.394000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4285 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635306130623562326531383661653964383639626436653065356663 Jan 20 01:34:18.394000 audit: BPF prog-id=206 op=UNLOAD Jan 20 01:34:18.394000 audit[4300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4285 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635306130623562326531383661653964383639626436653065356663 Jan 20 01:34:18.394000 audit: BPF 
prog-id=205 op=UNLOAD Jan 20 01:34:18.394000 audit[4300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4285 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635306130623562326531383661653964383639626436653065356663 Jan 20 01:34:18.395000 audit: BPF prog-id=207 op=LOAD Jan 20 01:34:18.395000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4285 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635306130623562326531383661653964383639626436653065356663 Jan 20 01:34:18.403694 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:18.425792 systemd[1]: Started cri-containerd-1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00.scope - libcontainer container 1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00. 
Jan 20 01:34:18.486000 audit: BPF prog-id=208 op=LOAD Jan 20 01:34:18.487000 audit: BPF prog-id=209 op=LOAD Jan 20 01:34:18.487000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4356 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343662626231363538623231656361363932383863666162666432 Jan 20 01:34:18.487000 audit: BPF prog-id=209 op=UNLOAD Jan 20 01:34:18.487000 audit[4373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4356 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343662626231363538623231656361363932383863666162666432 Jan 20 01:34:18.488000 audit: BPF prog-id=210 op=LOAD Jan 20 01:34:18.488000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4356 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.488000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343662626231363538623231656361363932383863666162666432 Jan 20 01:34:18.488000 audit: BPF prog-id=211 op=LOAD Jan 20 01:34:18.488000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4356 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343662626231363538623231656361363932383863666162666432 Jan 20 01:34:18.488000 audit: BPF prog-id=211 op=UNLOAD Jan 20 01:34:18.488000 audit[4373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4356 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343662626231363538623231656361363932383863666162666432 Jan 20 01:34:18.488000 audit: BPF prog-id=210 op=UNLOAD Jan 20 01:34:18.488000 audit[4373]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4356 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:34:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343662626231363538623231656361363932383863666162666432 Jan 20 01:34:18.488000 audit: BPF prog-id=212 op=LOAD Jan 20 01:34:18.488000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4356 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343662626231363538623231656361363932383863666162666432 Jan 20 01:34:18.491332 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:18.495619 containerd[1611]: time="2026-01-20T01:34:18.495579047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rs9sl,Uid:93c423b9-f734-475b-aea9-f003af7097a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"f50a0b5b2e186ae9d869bd6e0e5fc6e4ca3c13a81d89b53791a07f13f894d2e8\"" Jan 20 01:34:18.510196 containerd[1611]: time="2026-01-20T01:34:18.508674250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:34:18.526000 audit: BPF prog-id=213 op=LOAD Jan 20 01:34:18.526000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb485fb20 a2=94 a3=1 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:34:18.526000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.527000 audit: BPF prog-id=213 op=UNLOAD Jan 20 01:34:18.527000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb485fb20 a2=94 a3=1 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.527000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.547000 audit: BPF prog-id=214 op=LOAD Jan 20 01:34:18.547000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb485fb10 a2=94 a3=4 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.547000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.547000 audit: BPF prog-id=214 op=UNLOAD Jan 20 01:34:18.547000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeb485fb10 a2=0 a3=4 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.547000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.548000 audit: BPF prog-id=215 op=LOAD Jan 20 01:34:18.548000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb485f970 a2=94 a3=5 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.548000 audit: BPF prog-id=215 op=UNLOAD Jan 20 01:34:18.548000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb485f970 a2=0 a3=5 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.548000 audit: BPF prog-id=216 op=LOAD Jan 20 01:34:18.548000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb485fb90 a2=94 a3=6 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.548000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.548000 audit: BPF prog-id=216 op=UNLOAD Jan 20 01:34:18.548000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeb485fb90 a2=0 a3=6 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.548000 audit: BPF prog-id=217 op=LOAD Jan 20 01:34:18.548000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb485f340 a2=94 a3=88 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.549000 audit: BPF prog-id=218 op=LOAD Jan 20 01:34:18.549000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffeb485f1c0 a2=94 a3=2 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.549000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.549000 audit: BPF prog-id=218 op=UNLOAD Jan 20 01:34:18.549000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffeb485f1f0 a2=0 a3=7ffeb485f2f0 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.549000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.549000 audit: BPF prog-id=217 op=UNLOAD Jan 20 01:34:18.549000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=147c5d10 a2=0 a3=c2ca5bacadef0e59 items=0 ppid=4044 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.549000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:34:18.565000 audit: BPF prog-id=199 op=UNLOAD Jan 20 01:34:18.565000 audit[4044]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0005d5480 a2=0 a3=0 items=0 ppid=4000 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.565000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 01:34:18.568164 containerd[1611]: 
time="2026-01-20T01:34:18.567886189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-ct8ff,Uid:c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1e46bbb1658b21eca69288cfabfd2428e0e46b5c0b6ca8b8ce3b3b447faf3a00\"" Jan 20 01:34:18.575549 containerd[1611]: time="2026-01-20T01:34:18.575338172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:18.579417 containerd[1611]: time="2026-01-20T01:34:18.578794243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:34:18.579417 containerd[1611]: time="2026-01-20T01:34:18.578875838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:18.579522 kubelet[2780]: E0120 01:34:18.579169 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:34:18.579522 kubelet[2780]: E0120 01:34:18.579278 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:34:18.580241 containerd[1611]: time="2026-01-20T01:34:18.580206713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:34:18.580664 kubelet[2780]: E0120 01:34:18.580023 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmwd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rs9sl_calico-system(93c423b9-f734-475b-aea9-f003af7097a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:18.582238 kubelet[2780]: E0120 01:34:18.582195 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:34:18.650792 containerd[1611]: time="2026-01-20T01:34:18.650665471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:18.652656 containerd[1611]: time="2026-01-20T01:34:18.652431656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:34:18.652940 containerd[1611]: time="2026-01-20T01:34:18.652782862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:18.653232 kubelet[2780]: E0120 01:34:18.653171 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:18.653232 kubelet[2780]: E0120 01:34:18.653235 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:18.653398 kubelet[2780]: E0120 01:34:18.653341 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-ct8ff_calico-apiserver(c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:18.654931 kubelet[2780]: E0120 01:34:18.654780 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:34:18.665000 audit[4428]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4428 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:18.665000 audit[4428]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff33c92870 a2=0 a3=7fff33c9285c items=0 ppid=4044 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.665000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:18.667000 audit[4430]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:18.667000 audit[4430]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffd4ef8b50 a2=0 a3=7fffd4ef8b3c items=0 ppid=4044 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.667000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:18.678000 audit[4427]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4427 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:18.678000 audit[4427]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe29f06390 a2=0 a3=7ffe29f0637c items=0 ppid=4044 pid=4427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.678000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:18.697740 kubelet[2780]: E0120 01:34:18.697612 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:18.698279 containerd[1611]: time="2026-01-20T01:34:18.698173925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5zczl,Uid:7c57aba5-9af6-45bb-832d-1152db895836,Namespace:kube-system,Attempt:0,}" Jan 20 01:34:18.681000 audit[4429]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4429 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:18.681000 audit[4429]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc8af9a8a0 a2=0 a3=7ffc8af9a88c items=0 ppid=4044 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.681000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:18.755000 audit[4453]: NETFILTER_CFG table=filter:125 family=2 entries=86 op=nft_register_chain pid=4453 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:18.755000 audit[4453]: SYSCALL arch=c000003e syscall=46 success=yes exit=50488 a0=3 a1=7ffcda22f350 a2=0 a3=7ffcda22f33c items=0 ppid=4044 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.755000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:18.898210 systemd-networkd[1506]: calie19846e2380: Link UP Jan 20 01:34:18.900399 systemd-networkd[1506]: calie19846e2380: Gained carrier Jan 20 01:34:18.922751 containerd[1611]: 2026-01-20 01:34:18.762 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5zczl-eth0 coredns-668d6bf9bc- kube-system 7c57aba5-9af6-45bb-832d-1152db895836 823 0 2026-01-20 01:33:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5zczl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie19846e2380 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-" Jan 20 01:34:18.922751 containerd[1611]: 2026-01-20 01:34:18.763 
[INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" Jan 20 01:34:18.922751 containerd[1611]: 2026-01-20 01:34:18.803 [INFO][4456] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" HandleID="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Workload="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.803 [INFO][4456] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" HandleID="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Workload="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a55d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5zczl", "timestamp":"2026-01-20 01:34:18.803688004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.804 [INFO][4456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.804 [INFO][4456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.804 [INFO][4456] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.816 [INFO][4456] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" host="localhost" Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.832 [INFO][4456] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.850 [INFO][4456] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.859 [INFO][4456] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.865 [INFO][4456] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:18.923509 containerd[1611]: 2026-01-20 01:34:18.865 [INFO][4456] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" host="localhost" Jan 20 01:34:18.923886 containerd[1611]: 2026-01-20 01:34:18.868 [INFO][4456] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3 Jan 20 01:34:18.923886 containerd[1611]: 2026-01-20 01:34:18.880 [INFO][4456] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" host="localhost" Jan 20 01:34:18.923886 containerd[1611]: 2026-01-20 01:34:18.890 [INFO][4456] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" host="localhost" Jan 20 01:34:18.923886 containerd[1611]: 2026-01-20 01:34:18.890 [INFO][4456] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" host="localhost" Jan 20 01:34:18.923886 containerd[1611]: 2026-01-20 01:34:18.890 [INFO][4456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:18.923886 containerd[1611]: 2026-01-20 01:34:18.890 [INFO][4456] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" HandleID="k8s-pod-network.70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Workload="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" Jan 20 01:34:18.924555 containerd[1611]: 2026-01-20 01:34:18.894 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5zczl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7c57aba5-9af6-45bb-832d-1152db895836", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5zczl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie19846e2380", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:18.924757 containerd[1611]: 2026-01-20 01:34:18.894 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" Jan 20 01:34:18.924757 containerd[1611]: 2026-01-20 01:34:18.894 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie19846e2380 ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" Jan 20 01:34:18.924757 containerd[1611]: 2026-01-20 01:34:18.902 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" Jan 20 01:34:18.924876 containerd[1611]: 2026-01-20 01:34:18.902 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5zczl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7c57aba5-9af6-45bb-832d-1152db895836", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3", Pod:"coredns-668d6bf9bc-5zczl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie19846e2380", MAC:"d2:18:5f:c6:e7:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:18.924876 containerd[1611]: 2026-01-20 01:34:18.918 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" Namespace="kube-system" Pod="coredns-668d6bf9bc-5zczl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5zczl-eth0" Jan 20 01:34:18.940000 audit[4475]: NETFILTER_CFG table=filter:126 family=2 entries=56 op=nft_register_chain pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:18.940000 audit[4475]: SYSCALL arch=c000003e syscall=46 success=yes exit=27780 a0=3 a1=7ffcebc1ac70 a2=0 a3=7ffcebc1ac5c items=0 ppid=4044 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:18.940000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:18.964292 containerd[1611]: time="2026-01-20T01:34:18.964244765Z" level=info msg="connecting to shim 70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3" address="unix:///run/containerd/s/c23b22a70371f0ced867730c7bb14010444df0a475de94b6fa736a0ad7956ebd" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:19.027560 systemd[1]: Started cri-containerd-70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3.scope - libcontainer container 70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3. 
Jan 20 01:34:19.056000 audit: BPF prog-id=219 op=LOAD Jan 20 01:34:19.057000 audit: BPF prog-id=220 op=LOAD Jan 20 01:34:19.057000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4483 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643763316534653933666438303033316164346333333962363962 Jan 20 01:34:19.057000 audit: BPF prog-id=220 op=UNLOAD Jan 20 01:34:19.057000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643763316534653933666438303033316164346333333962363962 Jan 20 01:34:19.058000 audit: BPF prog-id=221 op=LOAD Jan 20 01:34:19.058000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4483 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.058000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643763316534653933666438303033316164346333333962363962 Jan 20 01:34:19.058000 audit: BPF prog-id=222 op=LOAD Jan 20 01:34:19.058000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4483 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643763316534653933666438303033316164346333333962363962 Jan 20 01:34:19.058000 audit: BPF prog-id=222 op=UNLOAD Jan 20 01:34:19.058000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643763316534653933666438303033316164346333333962363962 Jan 20 01:34:19.058000 audit: BPF prog-id=221 op=UNLOAD Jan 20 01:34:19.058000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:34:19.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643763316534653933666438303033316164346333333962363962 Jan 20 01:34:19.058000 audit: BPF prog-id=223 op=LOAD Jan 20 01:34:19.058000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4483 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730643763316534653933666438303033316164346333333962363962 Jan 20 01:34:19.061689 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:19.143198 containerd[1611]: time="2026-01-20T01:34:19.143034327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5zczl,Uid:7c57aba5-9af6-45bb-832d-1152db895836,Namespace:kube-system,Attempt:0,} returns sandbox id \"70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3\"" Jan 20 01:34:19.150147 kubelet[2780]: E0120 01:34:19.149451 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:19.163932 containerd[1611]: time="2026-01-20T01:34:19.162053417Z" level=info msg="CreateContainer within sandbox \"70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:34:19.202533 systemd-networkd[1506]: 
cali8ac9b45ef89: Gained IPv6LL Jan 20 01:34:19.244886 kubelet[2780]: E0120 01:34:19.239568 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:34:19.248688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358447435.mount: Deactivated successfully. Jan 20 01:34:19.260767 kubelet[2780]: E0120 01:34:19.260198 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:34:19.260767 kubelet[2780]: E0120 01:34:19.260518 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:34:19.269973 containerd[1611]: time="2026-01-20T01:34:19.269924651Z" level=info msg="Container 76198b87d5606eee9d7b36905c93d1a15de63aebbc942c2cd62d04cac0a82766: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:34:19.352917 containerd[1611]: time="2026-01-20T01:34:19.352423690Z" level=info msg="CreateContainer within sandbox \"70d7c1e4e93fd80031ad4c339b69bb98cf92d3e3f708a0770aa51c01cc5112b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76198b87d5606eee9d7b36905c93d1a15de63aebbc942c2cd62d04cac0a82766\"" Jan 20 01:34:19.373533 containerd[1611]: time="2026-01-20T01:34:19.372144795Z" level=info msg="StartContainer for \"76198b87d5606eee9d7b36905c93d1a15de63aebbc942c2cd62d04cac0a82766\"" Jan 20 01:34:19.383026 containerd[1611]: time="2026-01-20T01:34:19.381789571Z" level=info msg="connecting to shim 76198b87d5606eee9d7b36905c93d1a15de63aebbc942c2cd62d04cac0a82766" address="unix:///run/containerd/s/c23b22a70371f0ced867730c7bb14010444df0a475de94b6fa736a0ad7956ebd" protocol=ttrpc version=3 Jan 20 01:34:19.393000 audit[4523]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:19.393000 audit[4523]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde8eb6e40 a2=0 a3=7ffde8eb6e2c items=0 ppid=2936 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 
01:34:19.409000 audit[4523]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:19.409000 audit[4523]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffde8eb6e40 a2=0 a3=0 items=0 ppid=2936 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:19.465000 audit[4536]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:19.465000 audit[4536]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc0207090 a2=0 a3=7ffdc020707c items=0 ppid=2936 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.465000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:19.473340 systemd[1]: Started cri-containerd-76198b87d5606eee9d7b36905c93d1a15de63aebbc942c2cd62d04cac0a82766.scope - libcontainer container 76198b87d5606eee9d7b36905c93d1a15de63aebbc942c2cd62d04cac0a82766. 
Jan 20 01:34:19.473000 audit[4536]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:19.473000 audit[4536]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdc0207090 a2=0 a3=0 items=0 ppid=2936 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:19.524000 audit: BPF prog-id=224 op=LOAD Jan 20 01:34:19.528000 audit: BPF prog-id=225 op=LOAD Jan 20 01:34:19.528000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4483 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313938623837643536303665656539643762333639303563393364 Jan 20 01:34:19.528000 audit: BPF prog-id=225 op=UNLOAD Jan 20 01:34:19.528000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.528000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313938623837643536303665656539643762333639303563393364 Jan 20 01:34:19.528000 audit: BPF prog-id=226 op=LOAD Jan 20 01:34:19.528000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4483 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313938623837643536303665656539643762333639303563393364 Jan 20 01:34:19.528000 audit: BPF prog-id=227 op=LOAD Jan 20 01:34:19.528000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4483 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313938623837643536303665656539643762333639303563393364 Jan 20 01:34:19.528000 audit: BPF prog-id=227 op=UNLOAD Jan 20 01:34:19.528000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:34:19.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313938623837643536303665656539643762333639303563393364 Jan 20 01:34:19.528000 audit: BPF prog-id=226 op=UNLOAD Jan 20 01:34:19.528000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313938623837643536303665656539643762333639303563393364 Jan 20 01:34:19.528000 audit: BPF prog-id=228 op=LOAD Jan 20 01:34:19.528000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4483 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:19.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313938623837643536303665656539643762333639303563393364 Jan 20 01:34:19.575363 systemd-networkd[1506]: vxlan.calico: Gained IPv6LL Jan 20 01:34:19.602620 containerd[1611]: time="2026-01-20T01:34:19.602542831Z" level=info msg="StartContainer for \"76198b87d5606eee9d7b36905c93d1a15de63aebbc942c2cd62d04cac0a82766\" returns successfully" Jan 20 01:34:19.698138 kubelet[2780]: E0120 
01:34:19.697399 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:19.698475 containerd[1611]: time="2026-01-20T01:34:19.698384151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phdz7,Uid:164d51f9-eed6-48ef-9188-a78d4106afb9,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:19.700429 containerd[1611]: time="2026-01-20T01:34:19.700396055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw8rq,Uid:aab500f2-8a22-416d-a3ca-7e80812d5776,Namespace:kube-system,Attempt:0,}" Jan 20 01:34:19.832511 systemd-networkd[1506]: calif5ce8e97758: Gained IPv6LL Jan 20 01:34:20.155247 systemd-networkd[1506]: cali776f4596313: Link UP Jan 20 01:34:20.160491 systemd-networkd[1506]: cali776f4596313: Gained carrier Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:19.932 [INFO][4554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--phdz7-eth0 csi-node-driver- calico-system 164d51f9-eed6-48ef-9188-a78d4106afb9 720 0 2026-01-20 01:33:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-phdz7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali776f4596313 [] [] }} ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Namespace="calico-system" Pod="csi-node-driver-phdz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:19.933 [INFO][4554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Namespace="calico-system" Pod="csi-node-driver-phdz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-eth0" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.020 [INFO][4586] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" HandleID="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Workload="localhost-k8s-csi--node--driver--phdz7-eth0" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.021 [INFO][4586] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" HandleID="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Workload="localhost-k8s-csi--node--driver--phdz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-phdz7", "timestamp":"2026-01-20 01:34:20.020773332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.021 [INFO][4586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.021 [INFO][4586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.021 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.041 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.061 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.077 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.086 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.094 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.095 [INFO][4586] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.101 [INFO][4586] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20 Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.113 [INFO][4586] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.130 [INFO][4586] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.130 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" host="localhost" Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.130 [INFO][4586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:20.213955 containerd[1611]: 2026-01-20 01:34:20.130 [INFO][4586] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" HandleID="k8s-pod-network.bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Workload="localhost-k8s-csi--node--driver--phdz7-eth0" Jan 20 01:34:20.215285 containerd[1611]: 2026-01-20 01:34:20.142 [INFO][4554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Namespace="calico-system" Pod="csi-node-driver-phdz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--phdz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"164d51f9-eed6-48ef-9188-a78d4106afb9", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-phdz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali776f4596313", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:20.215285 containerd[1611]: 2026-01-20 01:34:20.144 [INFO][4554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Namespace="calico-system" Pod="csi-node-driver-phdz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-eth0" Jan 20 01:34:20.215285 containerd[1611]: 2026-01-20 01:34:20.144 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali776f4596313 ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Namespace="calico-system" Pod="csi-node-driver-phdz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-eth0" Jan 20 01:34:20.215285 containerd[1611]: 2026-01-20 01:34:20.162 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Namespace="calico-system" Pod="csi-node-driver-phdz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-eth0" Jan 20 01:34:20.215285 containerd[1611]: 2026-01-20 01:34:20.166 [INFO][4554] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" 
Namespace="calico-system" Pod="csi-node-driver-phdz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--phdz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"164d51f9-eed6-48ef-9188-a78d4106afb9", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20", Pod:"csi-node-driver-phdz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali776f4596313", MAC:"16:56:10:1e:22:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:20.215285 containerd[1611]: 2026-01-20 01:34:20.204 [INFO][4554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" Namespace="calico-system" Pod="csi-node-driver-phdz7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--phdz7-eth0" Jan 20 01:34:20.254000 audit[4610]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=4610 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:20.254000 audit[4610]: SYSCALL arch=c000003e syscall=46 success=yes exit=21936 a0=3 a1=7ffc9e6366b0 a2=0 a3=7ffc9e63669c items=0 ppid=4044 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.254000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:20.287317 kubelet[2780]: E0120 01:34:20.286556 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:20.291767 kubelet[2780]: E0120 01:34:20.291616 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:34:20.291918 kubelet[2780]: E0120 01:34:20.291783 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:34:20.302667 containerd[1611]: time="2026-01-20T01:34:20.302531455Z" level=info msg="connecting to shim bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20" address="unix:///run/containerd/s/3c6b5b08c67418ddbaf4882492b53aa3cbb403d7ca2909633b4f2c808f5d71ac" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:20.333909 systemd-networkd[1506]: califdea1e6ea65: Link UP Jan 20 01:34:20.338031 systemd-networkd[1506]: califdea1e6ea65: Gained carrier Jan 20 01:34:20.393500 systemd[1]: Started cri-containerd-bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20.scope - libcontainer container bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20. Jan 20 01:34:20.410272 kubelet[2780]: I0120 01:34:20.409696 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5zczl" podStartSLOduration=37.409671603 podStartE2EDuration="37.409671603s" podCreationTimestamp="2026-01-20 01:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:34:20.348064863 +0000 UTC m=+42.792575671" watchObservedRunningTime="2026-01-20 01:34:20.409671603 +0000 UTC m=+42.854182401" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:19.925 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0 coredns-668d6bf9bc- kube-system aab500f2-8a22-416d-a3ca-7e80812d5776 829 0 2026-01-20 01:33:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
localhost coredns-668d6bf9bc-jw8rq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califdea1e6ea65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:19.926 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.054 [INFO][4584] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" HandleID="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Workload="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.055 [INFO][4584] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" HandleID="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Workload="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jw8rq", "timestamp":"2026-01-20 01:34:20.054446952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.055 [INFO][4584] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.131 [INFO][4584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.131 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.185 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.220 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.235 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.239 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.249 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.250 [INFO][4584] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.260 [INFO][4584] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.292 [INFO][4584] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 
01:34:20.309 [INFO][4584] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.310 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" host="localhost" Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.311 [INFO][4584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:20.427392 containerd[1611]: 2026-01-20 01:34:20.312 [INFO][4584] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" HandleID="k8s-pod-network.96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Workload="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" Jan 20 01:34:20.429372 containerd[1611]: 2026-01-20 01:34:20.319 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aab500f2-8a22-416d-a3ca-7e80812d5776", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-jw8rq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdea1e6ea65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:20.429372 containerd[1611]: 2026-01-20 01:34:20.319 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" Jan 20 01:34:20.429372 containerd[1611]: 2026-01-20 01:34:20.319 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdea1e6ea65 ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" Jan 20 01:34:20.429372 containerd[1611]: 2026-01-20 01:34:20.337 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" Jan 20 01:34:20.429372 containerd[1611]: 2026-01-20 01:34:20.344 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aab500f2-8a22-416d-a3ca-7e80812d5776", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b", Pod:"coredns-668d6bf9bc-jw8rq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdea1e6ea65", MAC:"d2:e5:a9:20:a3:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:20.429372 containerd[1611]: 2026-01-20 01:34:20.414 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw8rq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw8rq-eth0" Jan 20 01:34:20.440000 audit[4657]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:20.440000 audit[4657]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd7237390 a2=0 a3=7fffd723737c items=0 ppid=2936 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:20.441000 audit: BPF prog-id=229 op=LOAD Jan 20 01:34:20.443000 audit: BPF prog-id=230 op=LOAD Jan 20 01:34:20.443000 audit[4631]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4620 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.443000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376236326230663731663539373463343130653435323365386536 Jan 20 01:34:20.443000 audit: BPF prog-id=230 op=UNLOAD Jan 20 01:34:20.443000 audit[4631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4620 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376236326230663731663539373463343130653435323365386536 Jan 20 01:34:20.444000 audit: BPF prog-id=231 op=LOAD Jan 20 01:34:20.444000 audit[4631]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4620 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376236326230663731663539373463343130653435323365386536 Jan 20 01:34:20.444000 audit: BPF prog-id=232 op=LOAD Jan 20 01:34:20.444000 audit[4631]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4620 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:34:20.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376236326230663731663539373463343130653435323365386536 Jan 20 01:34:20.444000 audit: BPF prog-id=232 op=UNLOAD Jan 20 01:34:20.444000 audit[4631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4620 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376236326230663731663539373463343130653435323365386536 Jan 20 01:34:20.444000 audit: BPF prog-id=231 op=UNLOAD Jan 20 01:34:20.444000 audit[4631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4620 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376236326230663731663539373463343130653435323365386536 Jan 20 01:34:20.444000 audit: BPF prog-id=233 op=LOAD Jan 20 01:34:20.444000 audit[4631]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4620 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376236326230663731663539373463343130653435323365386536 Jan 20 01:34:20.447000 audit[4657]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:20.447000 audit[4657]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffd7237390 a2=0 a3=0 items=0 ppid=2936 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:20.450366 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:20.517857 containerd[1611]: time="2026-01-20T01:34:20.517660374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phdz7,Uid:164d51f9-eed6-48ef-9188-a78d4106afb9,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd7b62b0f71f5974c410e4523e8e6415025a304c44c2d55a64708d072ce60b20\"" Jan 20 01:34:20.521376 containerd[1611]: time="2026-01-20T01:34:20.521315489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:34:20.525197 kernel: kauditd_printk_skb: 368 callbacks suppressed Jan 20 01:34:20.525292 kernel: audit: type=1325 audit(1768872860.520:683): table=filter:134 family=2 entries=50 op=nft_register_chain pid=4670 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:20.520000 audit[4670]: NETFILTER_CFG table=filter:134 family=2 
entries=50 op=nft_register_chain pid=4670 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:20.525457 containerd[1611]: time="2026-01-20T01:34:20.522607325Z" level=info msg="connecting to shim 96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b" address="unix:///run/containerd/s/0111b1c16fed58bc8f5d50e328ad62af256cc42298bf9d1d1370c6956608fcd8" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:20.520000 audit[4670]: SYSCALL arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffc5a6b56d0 a2=0 a3=7ffc5a6b56bc items=0 ppid=4044 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.550160 kernel: audit: type=1300 audit(1768872860.520:683): arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffc5a6b56d0 a2=0 a3=7ffc5a6b56bc items=0 ppid=4044 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.520000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:20.559187 kernel: audit: type=1327 audit(1768872860.520:683): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:20.588522 systemd[1]: Started cri-containerd-96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b.scope - libcontainer container 96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b. 
Jan 20 01:34:20.603017 containerd[1611]: time="2026-01-20T01:34:20.602884298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:20.605873 containerd[1611]: time="2026-01-20T01:34:20.604882009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:34:20.605873 containerd[1611]: time="2026-01-20T01:34:20.604955146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:20.606177 kubelet[2780]: E0120 01:34:20.605415 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:34:20.606177 kubelet[2780]: E0120 01:34:20.605474 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:34:20.606177 kubelet[2780]: E0120 01:34:20.605670 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 01:34:20.608147 containerd[1611]: time="2026-01-20T01:34:20.607846834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:34:20.614000 audit: BPF prog-id=234 op=LOAD Jan 20 01:34:20.618000 audit: BPF prog-id=235 op=LOAD Jan 20 01:34:20.624041 kernel: audit: type=1334 audit(1768872860.614:684): prog-id=234 op=LOAD Jan 20 01:34:20.624175 kernel: audit: type=1334 audit(1768872860.618:685): prog-id=235 op=LOAD Jan 20 01:34:20.618000 audit[4690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.624522 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:20.638136 kernel: audit: type=1300 audit(1768872860.618:685): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.656184 kernel: audit: type=1327 audit(1768872860.618:685): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.656331 kernel: audit: 
type=1334 audit(1768872860.618:686): prog-id=235 op=UNLOAD Jan 20 01:34:20.618000 audit: BPF prog-id=235 op=UNLOAD Jan 20 01:34:20.618000 audit[4690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.672647 kernel: audit: type=1300 audit(1768872860.618:686): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.618000 audit: BPF prog-id=236 op=LOAD Jan 20 01:34:20.618000 audit[4690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.618000 audit: BPF prog-id=237 op=LOAD Jan 20 01:34:20.618000 audit[4690]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.693578 kernel: audit: type=1327 audit(1768872860.618:686): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.618000 audit: BPF prog-id=237 op=UNLOAD Jan 20 01:34:20.618000 audit[4690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.618000 audit: BPF prog-id=236 op=UNLOAD Jan 20 01:34:20.618000 audit[4690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.618000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.618000 audit: BPF prog-id=238 op=LOAD Jan 20 01:34:20.618000 audit[4690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4678 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936343936623433646463626263306662663365656664313432353637 Jan 20 01:34:20.694350 containerd[1611]: time="2026-01-20T01:34:20.692993568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:20.699292 containerd[1611]: time="2026-01-20T01:34:20.699052971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-947d9dcc-bp5fh,Uid:e535c75b-4142-4085-8d9d-2841894e5fe8,Namespace:calico-system,Attempt:0,}" Jan 20 01:34:20.702689 containerd[1611]: time="2026-01-20T01:34:20.700990315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:34:20.703467 containerd[1611]: time="2026-01-20T01:34:20.703341818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:20.703467 containerd[1611]: 
time="2026-01-20T01:34:20.703431049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd7bff465-4rkgx,Uid:d9baf707-371f-47e4-9f67-1785bd6ba68b,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:34:20.705143 kubelet[2780]: E0120 01:34:20.705033 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:34:20.706506 kubelet[2780]: E0120 01:34:20.705270 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:34:20.706506 kubelet[2780]: E0120 01:34:20.705435 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:20.709630 kubelet[2780]: E0120 01:34:20.709289 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:20.745235 containerd[1611]: time="2026-01-20T01:34:20.743784798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw8rq,Uid:aab500f2-8a22-416d-a3ca-7e80812d5776,Namespace:kube-system,Attempt:0,} returns sandbox id \"96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b\"" Jan 20 01:34:20.745451 kubelet[2780]: E0120 01:34:20.744551 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:20.751416 containerd[1611]: time="2026-01-20T01:34:20.751367560Z" level=info msg="CreateContainer within sandbox \"96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:34:20.795287 containerd[1611]: time="2026-01-20T01:34:20.795238754Z" level=info msg="Container 8f8018b5c6ec6893178c6ff243e790e8ece063c4496e23ab581a7e5eab5580a3: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:34:20.819308 containerd[1611]: time="2026-01-20T01:34:20.818966162Z" level=info 
msg="CreateContainer within sandbox \"96496b43ddcbbc0fbf3eefd142567463dc181b8a2270ae3f04d0ad36048f368b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f8018b5c6ec6893178c6ff243e790e8ece063c4496e23ab581a7e5eab5580a3\"" Jan 20 01:34:20.824158 containerd[1611]: time="2026-01-20T01:34:20.824070872Z" level=info msg="StartContainer for \"8f8018b5c6ec6893178c6ff243e790e8ece063c4496e23ab581a7e5eab5580a3\"" Jan 20 01:34:20.825454 containerd[1611]: time="2026-01-20T01:34:20.825426834Z" level=info msg="connecting to shim 8f8018b5c6ec6893178c6ff243e790e8ece063c4496e23ab581a7e5eab5580a3" address="unix:///run/containerd/s/0111b1c16fed58bc8f5d50e328ad62af256cc42298bf9d1d1370c6956608fcd8" protocol=ttrpc version=3 Jan 20 01:34:20.858151 systemd-networkd[1506]: calie19846e2380: Gained IPv6LL Jan 20 01:34:20.899847 systemd[1]: Started cri-containerd-8f8018b5c6ec6893178c6ff243e790e8ece063c4496e23ab581a7e5eab5580a3.scope - libcontainer container 8f8018b5c6ec6893178c6ff243e790e8ece063c4496e23ab581a7e5eab5580a3. 
Jan 20 01:34:20.945000 audit: BPF prog-id=239 op=LOAD Jan 20 01:34:20.946000 audit: BPF prog-id=240 op=LOAD Jan 20 01:34:20.946000 audit[4744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4678 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383031386235633665633638393331373863366666323433653739 Jan 20 01:34:20.946000 audit: BPF prog-id=240 op=UNLOAD Jan 20 01:34:20.946000 audit[4744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4678 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383031386235633665633638393331373863366666323433653739 Jan 20 01:34:20.946000 audit: BPF prog-id=241 op=LOAD Jan 20 01:34:20.946000 audit[4744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4678 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.946000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383031386235633665633638393331373863366666323433653739 Jan 20 01:34:20.946000 audit: BPF prog-id=242 op=LOAD Jan 20 01:34:20.946000 audit[4744]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4678 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383031386235633665633638393331373863366666323433653739 Jan 20 01:34:20.946000 audit: BPF prog-id=242 op=UNLOAD Jan 20 01:34:20.946000 audit[4744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4678 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383031386235633665633638393331373863366666323433653739 Jan 20 01:34:20.946000 audit: BPF prog-id=241 op=UNLOAD Jan 20 01:34:20.946000 audit[4744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4678 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:34:20.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383031386235633665633638393331373863366666323433653739 Jan 20 01:34:20.946000 audit: BPF prog-id=243 op=LOAD Jan 20 01:34:20.946000 audit[4744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4678 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:20.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866383031386235633665633638393331373863366666323433653739 Jan 20 01:34:21.077443 containerd[1611]: time="2026-01-20T01:34:21.077383321Z" level=info msg="StartContainer for \"8f8018b5c6ec6893178c6ff243e790e8ece063c4496e23ab581a7e5eab5580a3\" returns successfully" Jan 20 01:34:21.201462 systemd-networkd[1506]: cali03df2440193: Link UP Jan 20 01:34:21.204937 systemd-networkd[1506]: cali03df2440193: Gained carrier Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:20.893 [INFO][4731] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0 calico-kube-controllers-947d9dcc- calico-system e535c75b-4142-4085-8d9d-2841894e5fe8 820 0 2026-01-20 01:33:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:947d9dcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-947d9dcc-bp5fh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali03df2440193 [] [] }} ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:20.893 [INFO][4731] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.030 [INFO][4765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" HandleID="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Workload="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.031 [INFO][4765] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" HandleID="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Workload="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-947d9dcc-bp5fh", "timestamp":"2026-01-20 01:34:21.030792197 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 
01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.031 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.031 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.031 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.055 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.092 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.107 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.122 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.128 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.130 [INFO][4765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.135 [INFO][4765] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884 Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.150 [INFO][4765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.171 [INFO][4765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.171 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" host="localhost" Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.171 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:21.242825 containerd[1611]: 2026-01-20 01:34:21.172 [INFO][4765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" HandleID="k8s-pod-network.e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Workload="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" Jan 20 01:34:21.244499 containerd[1611]: 2026-01-20 01:34:21.185 [INFO][4731] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0", GenerateName:"calico-kube-controllers-947d9dcc-", Namespace:"calico-system", SelfLink:"", UID:"e535c75b-4142-4085-8d9d-2841894e5fe8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 58, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"947d9dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-947d9dcc-bp5fh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali03df2440193", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:21.244499 containerd[1611]: 2026-01-20 01:34:21.185 [INFO][4731] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" Jan 20 01:34:21.244499 containerd[1611]: 2026-01-20 01:34:21.185 [INFO][4731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03df2440193 ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" Jan 20 01:34:21.244499 containerd[1611]: 2026-01-20 01:34:21.205 [INFO][4731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" Jan 20 01:34:21.244499 containerd[1611]: 2026-01-20 01:34:21.205 [INFO][4731] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0", GenerateName:"calico-kube-controllers-947d9dcc-", Namespace:"calico-system", SelfLink:"", UID:"e535c75b-4142-4085-8d9d-2841894e5fe8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"947d9dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884", Pod:"calico-kube-controllers-947d9dcc-bp5fh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali03df2440193", MAC:"ae:09:a5:91:fd:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:21.244499 containerd[1611]: 2026-01-20 01:34:21.233 [INFO][4731] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" Namespace="calico-system" Pod="calico-kube-controllers-947d9dcc-bp5fh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--947d9dcc--bp5fh-eth0" Jan 20 01:34:21.294415 kubelet[2780]: E0120 01:34:21.294054 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:21.300006 kubelet[2780]: E0120 01:34:21.299890 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:21.302537 kubelet[2780]: E0120 01:34:21.302422 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" 
podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:21.311000 audit[4803]: NETFILTER_CFG table=filter:135 family=2 entries=48 op=nft_register_chain pid=4803 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:21.311000 audit[4803]: SYSCALL arch=c000003e syscall=46 success=yes exit=23108 a0=3 a1=7ffdd9f355c0 a2=0 a3=7ffdd9f355ac items=0 ppid=4044 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.311000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:21.328586 containerd[1611]: time="2026-01-20T01:34:21.328349668Z" level=info msg="connecting to shim e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884" address="unix:///run/containerd/s/83da0d47de5a38f50ae9d3b448639c9fd33dc22d7d169aecc4c532dd228ef8f1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:21.427000 audit[4831]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:21.427000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7d0cd6d0 a2=0 a3=7ffd7d0cd6bc items=0 ppid=2936 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:21.436888 systemd-networkd[1506]: cali2ff82d7f0e3: Link UP Jan 20 01:34:21.439875 systemd-networkd[1506]: cali2ff82d7f0e3: Gained carrier Jan 20 01:34:21.447000 audit[4831]: NETFILTER_CFG 
table=nat:137 family=2 entries=35 op=nft_register_chain pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:21.447000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd7d0cd6d0 a2=0 a3=7ffd7d0cd6bc items=0 ppid=2936 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:21.453401 kubelet[2780]: I0120 01:34:21.449015 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jw8rq" podStartSLOduration=38.448989759 podStartE2EDuration="38.448989759s" podCreationTimestamp="2026-01-20 01:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:34:21.364346004 +0000 UTC m=+43.808856882" watchObservedRunningTime="2026-01-20 01:34:21.448989759 +0000 UTC m=+43.893500556" Jan 20 01:34:21.464865 systemd[1]: Started cri-containerd-e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884.scope - libcontainer container e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884. 
Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:20.879 [INFO][4715] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0 calico-apiserver-dd7bff465- calico-apiserver d9baf707-371f-47e4-9f67-1785bd6ba68b 831 0 2026-01-20 01:33:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd7bff465 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dd7bff465-4rkgx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ff82d7f0e3 [] [] }} ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:20.889 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.079 [INFO][4767] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" HandleID="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Workload="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.080 [INFO][4767] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" 
HandleID="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Workload="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038c4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dd7bff465-4rkgx", "timestamp":"2026-01-20 01:34:21.079918395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.080 [INFO][4767] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.175 [INFO][4767] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.175 [INFO][4767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.211 [INFO][4767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.234 [INFO][4767] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.263 [INFO][4767] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.269 [INFO][4767] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.281 [INFO][4767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 
2026-01-20 01:34:21.282 [INFO][4767] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.289 [INFO][4767] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870 Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.343 [INFO][4767] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.375 [INFO][4767] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.378 [INFO][4767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" host="localhost" Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.379 [INFO][4767] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:34:21.509191 containerd[1611]: 2026-01-20 01:34:21.379 [INFO][4767] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" HandleID="k8s-pod-network.3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Workload="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" Jan 20 01:34:21.510965 containerd[1611]: 2026-01-20 01:34:21.406 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0", GenerateName:"calico-apiserver-dd7bff465-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9baf707-371f-47e4-9f67-1785bd6ba68b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd7bff465", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dd7bff465-4rkgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff82d7f0e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:21.510965 containerd[1611]: 2026-01-20 01:34:21.407 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" Jan 20 01:34:21.510965 containerd[1611]: 2026-01-20 01:34:21.407 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ff82d7f0e3 ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" Jan 20 01:34:21.510965 containerd[1611]: 2026-01-20 01:34:21.439 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" Jan 20 01:34:21.510965 containerd[1611]: 2026-01-20 01:34:21.443 [INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0", GenerateName:"calico-apiserver-dd7bff465-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"d9baf707-371f-47e4-9f67-1785bd6ba68b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd7bff465", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870", Pod:"calico-apiserver-dd7bff465-4rkgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff82d7f0e3", MAC:"26:1c:f6:fb:7c:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:21.510965 containerd[1611]: 2026-01-20 01:34:21.492 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" Namespace="calico-apiserver" Pod="calico-apiserver-dd7bff465-4rkgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd7bff465--4rkgx-eth0" Jan 20 01:34:21.609000 audit: BPF prog-id=244 op=LOAD Jan 20 01:34:21.612276 containerd[1611]: time="2026-01-20T01:34:21.606557754Z" level=info msg="connecting to shim 3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870" 
address="unix:///run/containerd/s/8e96041b9d01ebc00280e5cf2b8f2afce61e1bd892aebb8125cbf4cf43e319da" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:21.610000 audit: BPF prog-id=245 op=LOAD Jan 20 01:34:21.610000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4812 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333635363161643136643037323931623934323665376463336564 Jan 20 01:34:21.610000 audit: BPF prog-id=245 op=UNLOAD Jan 20 01:34:21.610000 audit[4824]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333635363161643136643037323931623934323665376463336564 Jan 20 01:34:21.610000 audit: BPF prog-id=246 op=LOAD Jan 20 01:34:21.610000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4812 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.610000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333635363161643136643037323931623934323665376463336564 Jan 20 01:34:21.610000 audit: BPF prog-id=247 op=LOAD Jan 20 01:34:21.610000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4812 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333635363161643136643037323931623934323665376463336564 Jan 20 01:34:21.610000 audit: BPF prog-id=247 op=UNLOAD Jan 20 01:34:21.610000 audit[4824]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333635363161643136643037323931623934323665376463336564 Jan 20 01:34:21.610000 audit: BPF prog-id=246 op=UNLOAD Jan 20 01:34:21.610000 audit[4824]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:34:21.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333635363161643136643037323931623934323665376463336564 Jan 20 01:34:21.610000 audit: BPF prog-id=248 op=LOAD Jan 20 01:34:21.610000 audit[4824]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4812 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333635363161643136643037323931623934323665376463336564 Jan 20 01:34:21.621280 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:21.669000 audit[4873]: NETFILTER_CFG table=filter:138 family=2 entries=53 op=nft_register_chain pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:21.669000 audit[4873]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7ffd5d29cec0 a2=0 a3=7ffd5d29ceac items=0 ppid=4044 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.669000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:21.700152 containerd[1611]: time="2026-01-20T01:34:21.699878454Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-prz7k,Uid:c55441d4-7803-4009-82ca-ee9ec6a88be8,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:34:21.702153 systemd[1]: Started cri-containerd-3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870.scope - libcontainer container 3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870. Jan 20 01:34:21.755681 containerd[1611]: time="2026-01-20T01:34:21.755552593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-947d9dcc-bp5fh,Uid:e535c75b-4142-4085-8d9d-2841894e5fe8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e036561ad16d07291b9426e7dc3ed34d9ef82f67ed7861039b59ae08c772c884\"" Jan 20 01:34:21.761255 containerd[1611]: time="2026-01-20T01:34:21.761180170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:34:21.777000 audit: BPF prog-id=249 op=LOAD Jan 20 01:34:21.783000 audit: BPF prog-id=250 op=LOAD Jan 20 01:34:21.783000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4858 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343539633961643864373661636131623566623761313034633334 Jan 20 01:34:21.784000 audit: BPF prog-id=250 op=UNLOAD Jan 20 01:34:21.784000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.784000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343539633961643864373661636131623566623761313034633334 Jan 20 01:34:21.785000 audit: BPF prog-id=251 op=LOAD Jan 20 01:34:21.785000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4858 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343539633961643864373661636131623566623761313034633334 Jan 20 01:34:21.785000 audit: BPF prog-id=252 op=LOAD Jan 20 01:34:21.785000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4858 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343539633961643864373661636131623566623761313034633334 Jan 20 01:34:21.788000 audit: BPF prog-id=252 op=UNLOAD Jan 20 01:34:21.788000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343539633961643864373661636131623566623761313034633334 Jan 20 01:34:21.788000 audit: BPF prog-id=251 op=UNLOAD Jan 20 01:34:21.788000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343539633961643864373661636131623566623761313034633334 Jan 20 01:34:21.788000 audit: BPF prog-id=253 op=LOAD Jan 20 01:34:21.788000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4858 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:21.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343539633961643864373661636131623566623761313034633334 Jan 20 01:34:21.794971 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:21.860195 containerd[1611]: time="2026-01-20T01:34:21.859931506Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 20 01:34:21.863510 containerd[1611]: time="2026-01-20T01:34:21.863460774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:34:21.865294 containerd[1611]: time="2026-01-20T01:34:21.863573396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:21.868175 kubelet[2780]: E0120 01:34:21.867269 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:34:21.868175 kubelet[2780]: E0120 01:34:21.867345 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:34:21.868175 kubelet[2780]: E0120 01:34:21.867537 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-947d9dcc-bp5fh_calico-system(e535c75b-4142-4085-8d9d-2841894e5fe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:21.869567 kubelet[2780]: E0120 01:34:21.868939 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:34:21.925540 containerd[1611]: time="2026-01-20T01:34:21.925441002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd7bff465-4rkgx,Uid:d9baf707-371f-47e4-9f67-1785bd6ba68b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"3a459c9ad8d76aca1b5fb7a104c347423524b82bcf78b5f8da27cdda57278870\"" Jan 20 01:34:21.929506 containerd[1611]: time="2026-01-20T01:34:21.929400449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:34:22.017053 containerd[1611]: time="2026-01-20T01:34:22.016842688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:22.021175 containerd[1611]: time="2026-01-20T01:34:22.021022642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:34:22.021175 containerd[1611]: time="2026-01-20T01:34:22.021112921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:22.021670 kubelet[2780]: E0120 01:34:22.021617 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:22.021782 kubelet[2780]: E0120 01:34:22.021675 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:22.021900 kubelet[2780]: E0120 01:34:22.021848 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf2km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-dd7bff465-4rkgx_calico-apiserver(d9baf707-371f-47e4-9f67-1785bd6ba68b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:22.023144 kubelet[2780]: E0120 01:34:22.023070 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:34:22.045662 systemd-networkd[1506]: cali7d8a93f6232: Link UP Jan 20 01:34:22.047231 systemd-networkd[1506]: cali7d8a93f6232: Gained carrier Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.819 [INFO][4891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0 calico-apiserver-7c8dd7d667- calico-apiserver c55441d4-7803-4009-82ca-ee9ec6a88be8 832 0 2026-01-20 01:33:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c8dd7d667 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c8dd7d667-prz7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d8a93f6232 [] [] }} ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.819 [INFO][4891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.931 [INFO][4911] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" HandleID="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Workload="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.932 [INFO][4911] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" HandleID="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Workload="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c8dd7d667-prz7k", "timestamp":"2026-01-20 01:34:21.931846709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.932 [INFO][4911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.932 [INFO][4911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.932 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.950 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.963 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.977 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.982 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.986 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.986 [INFO][4911] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:21.989 [INFO][4911] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883 Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:22.019 [INFO][4911] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:22.036 [INFO][4911] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:22.036 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" host="localhost" Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:22.036 [INFO][4911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:34:22.079315 containerd[1611]: 2026-01-20 01:34:22.036 [INFO][4911] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" HandleID="k8s-pod-network.fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Workload="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" Jan 20 01:34:22.080422 containerd[1611]: 2026-01-20 01:34:22.040 [INFO][4891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0", GenerateName:"calico-apiserver-7c8dd7d667-", Namespace:"calico-apiserver", SelfLink:"", UID:"c55441d4-7803-4009-82ca-ee9ec6a88be8", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8dd7d667", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c8dd7d667-prz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d8a93f6232", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:22.080422 containerd[1611]: 2026-01-20 01:34:22.041 [INFO][4891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" Jan 20 01:34:22.080422 containerd[1611]: 2026-01-20 01:34:22.041 [INFO][4891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d8a93f6232 ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" Jan 20 01:34:22.080422 containerd[1611]: 2026-01-20 01:34:22.048 [INFO][4891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" Jan 20 01:34:22.080422 containerd[1611]: 2026-01-20 01:34:22.048 [INFO][4891] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0", GenerateName:"calico-apiserver-7c8dd7d667-", Namespace:"calico-apiserver", SelfLink:"", UID:"c55441d4-7803-4009-82ca-ee9ec6a88be8", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8dd7d667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883", Pod:"calico-apiserver-7c8dd7d667-prz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d8a93f6232", MAC:"ce:12:5e:14:de:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:34:22.080422 containerd[1611]: 2026-01-20 01:34:22.071 [INFO][4891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" Namespace="calico-apiserver" Pod="calico-apiserver-7c8dd7d667-prz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8dd7d667--prz7k-eth0" Jan 20 01:34:22.103000 audit[4933]: NETFILTER_CFG table=filter:139 family=2 entries=63 op=nft_register_chain pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:34:22.103000 audit[4933]: SYSCALL arch=c000003e syscall=46 success=yes exit=30648 a0=3 a1=7fff97d07090 a2=0 a3=7fff97d0707c items=0 ppid=4044 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.103000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:34:22.126033 containerd[1611]: time="2026-01-20T01:34:22.125897080Z" level=info msg="connecting to shim fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883" address="unix:///run/containerd/s/72d853a7837c00fa6577e1dfab6fcf5585d3e7302dcb8f34dfc6e0fbb2ea6585" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:34:22.135571 systemd-networkd[1506]: cali776f4596313: Gained IPv6LL Jan 20 01:34:22.187845 systemd[1]: Started cri-containerd-fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883.scope - libcontainer container fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883. 
Jan 20 01:34:22.208000 audit: BPF prog-id=254 op=LOAD Jan 20 01:34:22.209000 audit: BPF prog-id=255 op=LOAD Jan 20 01:34:22.209000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4943 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665386232613035666530626439326438386135363539333961653137 Jan 20 01:34:22.209000 audit: BPF prog-id=255 op=UNLOAD Jan 20 01:34:22.209000 audit[4954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4943 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665386232613035666530626439326438386135363539333961653137 Jan 20 01:34:22.209000 audit: BPF prog-id=256 op=LOAD Jan 20 01:34:22.209000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4943 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.209000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665386232613035666530626439326438386135363539333961653137 Jan 20 01:34:22.209000 audit: BPF prog-id=257 op=LOAD Jan 20 01:34:22.209000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4943 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665386232613035666530626439326438386135363539333961653137 Jan 20 01:34:22.209000 audit: BPF prog-id=257 op=UNLOAD Jan 20 01:34:22.209000 audit[4954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4943 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665386232613035666530626439326438386135363539333961653137 Jan 20 01:34:22.209000 audit: BPF prog-id=256 op=UNLOAD Jan 20 01:34:22.209000 audit[4954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4943 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:34:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665386232613035666530626439326438386135363539333961653137 Jan 20 01:34:22.209000 audit: BPF prog-id=258 op=LOAD Jan 20 01:34:22.209000 audit[4954]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4943 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665386232613035666530626439326438386135363539333961653137 Jan 20 01:34:22.213006 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:34:22.273944 containerd[1611]: time="2026-01-20T01:34:22.273698595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8dd7d667-prz7k,Uid:c55441d4-7803-4009-82ca-ee9ec6a88be8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fe8b2a05fe0bd92d88a565939ae176a6a3186e53a0b16ae289ecc1ba0ad19883\"" Jan 20 01:34:22.280478 containerd[1611]: time="2026-01-20T01:34:22.280361785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:34:22.308390 kubelet[2780]: E0120 01:34:22.308340 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:34:22.310920 kubelet[2780]: E0120 01:34:22.310896 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:22.314524 kubelet[2780]: E0120 01:34:22.314492 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:22.316173 kubelet[2780]: E0120 01:34:22.315924 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:34:22.321025 kubelet[2780]: E0120 01:34:22.320397 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:22.327362 systemd-networkd[1506]: cali03df2440193: Gained IPv6LL Jan 20 01:34:22.329020 systemd-networkd[1506]: califdea1e6ea65: Gained IPv6LL Jan 20 01:34:22.351524 containerd[1611]: time="2026-01-20T01:34:22.351463530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:22.367269 containerd[1611]: time="2026-01-20T01:34:22.366982244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:34:22.367798 containerd[1611]: time="2026-01-20T01:34:22.367041013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:22.367937 kubelet[2780]: E0120 01:34:22.367884 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:22.368323 kubelet[2780]: E0120 01:34:22.367944 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:22.368323 kubelet[2780]: E0120 01:34:22.368050 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79rhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-prz7k_calico-apiserver(c55441d4-7803-4009-82ca-ee9ec6a88be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:22.369740 kubelet[2780]: E0120 01:34:22.369627 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:34:22.396000 audit[4980]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:22.396000 audit[4980]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff6af92ad0 a2=0 a3=7fff6af92abc 
items=0 ppid=2936 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:22.415000 audit[4980]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=4980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:22.415000 audit[4980]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff6af92ad0 a2=0 a3=7fff6af92abc items=0 ppid=2936 pid=4980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:22.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:22.903347 systemd-networkd[1506]: cali2ff82d7f0e3: Gained IPv6LL Jan 20 01:34:23.314696 kubelet[2780]: E0120 01:34:23.314600 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:34:23.314696 kubelet[2780]: E0120 01:34:23.314654 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:34:23.314696 kubelet[2780]: E0120 01:34:23.314600 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:34:23.365000 audit[4983]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4983 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:23.365000 audit[4983]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb2ca33b0 a2=0 a3=7ffeb2ca339c items=0 ppid=2936 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:23.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:23.376000 audit[4983]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=4983 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:34:23.376000 audit[4983]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb2ca33b0 a2=0 a3=7ffeb2ca339c items=0 ppid=2936 
pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:34:23.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:34:23.799515 systemd-networkd[1506]: cali7d8a93f6232: Gained IPv6LL Jan 20 01:34:30.702524 containerd[1611]: time="2026-01-20T01:34:30.702412297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:34:30.793607 containerd[1611]: time="2026-01-20T01:34:30.793028160Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:30.804449 containerd[1611]: time="2026-01-20T01:34:30.803960082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:34:30.804449 containerd[1611]: time="2026-01-20T01:34:30.804040342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:30.807665 kubelet[2780]: E0120 01:34:30.806644 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:34:30.807665 kubelet[2780]: E0120 01:34:30.806698 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 
01:34:30.807665 kubelet[2780]: E0120 01:34:30.806882 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50d0da9d3db140cc8836270eb3a85a60,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:30.813477 containerd[1611]: time="2026-01-20T01:34:30.813259135Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:34:30.883178 containerd[1611]: time="2026-01-20T01:34:30.883024154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:30.885182 containerd[1611]: time="2026-01-20T01:34:30.884856074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:34:30.885182 containerd[1611]: time="2026-01-20T01:34:30.885038670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:30.886443 kubelet[2780]: E0120 01:34:30.886390 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:34:30.890624 kubelet[2780]: E0120 01:34:30.886593 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:34:30.890624 kubelet[2780]: E0120 01:34:30.889327 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:30.890624 kubelet[2780]: E0120 01:34:30.890562 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:34:31.701221 containerd[1611]: time="2026-01-20T01:34:31.701154304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:34:31.767690 containerd[1611]: time="2026-01-20T01:34:31.767578646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:31.769337 containerd[1611]: time="2026-01-20T01:34:31.769230878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:34:31.769453 containerd[1611]: time="2026-01-20T01:34:31.769340182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:31.769748 kubelet[2780]: E0120 01:34:31.769628 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:34:31.769829 kubelet[2780]: E0120 01:34:31.769753 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:34:31.770016 kubelet[2780]: E0120 01:34:31.769928 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmwd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rs9sl_calico-system(93c423b9-f734-475b-aea9-f003af7097a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:31.771516 kubelet[2780]: E0120 01:34:31.771461 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 
20 01:34:32.700593 containerd[1611]: time="2026-01-20T01:34:32.700492039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:34:32.770119 containerd[1611]: time="2026-01-20T01:34:32.769997793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:32.773072 containerd[1611]: time="2026-01-20T01:34:32.772567491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:34:32.773072 containerd[1611]: time="2026-01-20T01:34:32.772685752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:32.774014 kubelet[2780]: E0120 01:34:32.773852 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:32.774014 kubelet[2780]: E0120 01:34:32.773941 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:32.774563 kubelet[2780]: E0120 01:34:32.774167 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-ct8ff_calico-apiserver(c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:32.775552 kubelet[2780]: E0120 01:34:32.775408 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:34:33.699311 containerd[1611]: time="2026-01-20T01:34:33.699211345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:34:33.758525 containerd[1611]: time="2026-01-20T01:34:33.758412799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:33.761449 containerd[1611]: time="2026-01-20T01:34:33.761328728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:34:33.761449 containerd[1611]: time="2026-01-20T01:34:33.761360360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:33.761779 kubelet[2780]: E0120 01:34:33.761656 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:33.761862 kubelet[2780]: E0120 01:34:33.761775 2780 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:33.763160 containerd[1611]: time="2026-01-20T01:34:33.762457050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:34:33.763293 kubelet[2780]: E0120 01:34:33.762437 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79rhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-prz7k_calico-apiserver(c55441d4-7803-4009-82ca-ee9ec6a88be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:33.764293 kubelet[2780]: E0120 01:34:33.764180 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:34:33.829387 containerd[1611]: time="2026-01-20T01:34:33.829314180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:33.831406 containerd[1611]: time="2026-01-20T01:34:33.831075117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:34:33.831406 containerd[1611]: time="2026-01-20T01:34:33.831250865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:33.831566 kubelet[2780]: E0120 01:34:33.831504 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:33.831566 kubelet[2780]: E0120 01:34:33.831558 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:33.832000 kubelet[2780]: E0120 01:34:33.831690 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf2km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-dd7bff465-4rkgx_calico-apiserver(d9baf707-371f-47e4-9f67-1785bd6ba68b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:33.833164 kubelet[2780]: E0120 01:34:33.833072 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:34:34.698698 containerd[1611]: time="2026-01-20T01:34:34.698640356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:34:34.769668 containerd[1611]: time="2026-01-20T01:34:34.769559560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:34.771518 containerd[1611]: time="2026-01-20T01:34:34.771387482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:34:34.771518 containerd[1611]: time="2026-01-20T01:34:34.771465405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:34.771819 kubelet[2780]: E0120 01:34:34.771760 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:34:34.771944 kubelet[2780]: E0120 01:34:34.771824 2780 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:34:34.772066 kubelet[2780]: E0120 01:34:34.771969 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-947d9dcc-bp5fh_calico-system(e535c75b-4142-4085-8d9d-2841894e5fe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:34.773387 kubelet[2780]: E0120 01:34:34.773287 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:34:35.699947 containerd[1611]: 
time="2026-01-20T01:34:35.699595430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:34:35.770789 containerd[1611]: time="2026-01-20T01:34:35.770669261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:35.772316 containerd[1611]: time="2026-01-20T01:34:35.772245897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:34:35.772388 containerd[1611]: time="2026-01-20T01:34:35.772338459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:35.772857 kubelet[2780]: E0120 01:34:35.772654 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:34:35.772857 kubelet[2780]: E0120 01:34:35.772772 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:34:35.773658 kubelet[2780]: E0120 01:34:35.772943 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 01:34:35.775671 containerd[1611]: time="2026-01-20T01:34:35.775410934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:34:35.838409 containerd[1611]: time="2026-01-20T01:34:35.838312787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:35.840072 containerd[1611]: time="2026-01-20T01:34:35.840026388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:34:35.840439 containerd[1611]: time="2026-01-20T01:34:35.840066932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:35.840755 kubelet[2780]: E0120 01:34:35.840647 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:34:35.840838 kubelet[2780]: E0120 01:34:35.840775 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:34:35.841042 kubelet[2780]: E0120 01:34:35.840955 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:35.843268 kubelet[2780]: E0120 01:34:35.843200 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:43.699395 kubelet[2780]: E0120 01:34:43.699328 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:34:44.698589 kubelet[2780]: E0120 01:34:44.698363 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:34:44.699285 kubelet[2780]: E0120 01:34:44.699220 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:34:46.698575 kubelet[2780]: E0120 01:34:46.698426 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:34:48.616298 kubelet[2780]: E0120 01:34:48.615607 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:34:48.708456 kubelet[2780]: E0120 01:34:48.708365 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:34:49.700927 kubelet[2780]: E0120 01:34:49.700835 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:34:49.702348 kubelet[2780]: E0120 01:34:49.702302 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:34:55.705602 containerd[1611]: 
time="2026-01-20T01:34:55.705240069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:34:55.764888 containerd[1611]: time="2026-01-20T01:34:55.764614096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:55.770505 containerd[1611]: time="2026-01-20T01:34:55.769926652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:34:55.770505 containerd[1611]: time="2026-01-20T01:34:55.769989563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:55.770663 kubelet[2780]: E0120 01:34:55.770449 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:34:55.770663 kubelet[2780]: E0120 01:34:55.770573 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:34:55.771304 kubelet[2780]: E0120 01:34:55.770894 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmwd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rs9sl_calico-system(93c423b9-f734-475b-aea9-f003af7097a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:55.772059 containerd[1611]: time="2026-01-20T01:34:55.771454221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:34:55.772741 kubelet[2780]: E0120 01:34:55.772609 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:34:55.834725 containerd[1611]: time="2026-01-20T01:34:55.834620188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:55.836424 containerd[1611]: 
time="2026-01-20T01:34:55.836299402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:34:55.836424 containerd[1611]: time="2026-01-20T01:34:55.836355990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:55.836730 kubelet[2780]: E0120 01:34:55.836611 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:34:55.836840 kubelet[2780]: E0120 01:34:55.836735 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:34:55.836989 kubelet[2780]: E0120 01:34:55.836905 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50d0da9d3db140cc8836270eb3a85a60,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:55.839443 containerd[1611]: time="2026-01-20T01:34:55.839373402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:34:55.899400 containerd[1611]: 
time="2026-01-20T01:34:55.899277728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:55.902330 containerd[1611]: time="2026-01-20T01:34:55.902263229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:34:55.902405 containerd[1611]: time="2026-01-20T01:34:55.902387001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:55.902782 kubelet[2780]: E0120 01:34:55.902620 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:34:55.902883 kubelet[2780]: E0120 01:34:55.902845 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:34:55.903138 kubelet[2780]: E0120 01:34:55.903015 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:55.904442 kubelet[2780]: E0120 01:34:55.904349 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:34:58.701478 containerd[1611]: time="2026-01-20T01:34:58.701312413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:34:58.770022 containerd[1611]: time="2026-01-20T01:34:58.769932795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:34:58.771864 containerd[1611]: time="2026-01-20T01:34:58.771718706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:34:58.771977 containerd[1611]: time="2026-01-20T01:34:58.771895814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:34:58.772280 kubelet[2780]: E0120 01:34:58.772177 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:58.772280 kubelet[2780]: E0120 01:34:58.772246 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:34:58.772790 kubelet[2780]: E0120 01:34:58.772391 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79rhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-prz7k_calico-apiserver(c55441d4-7803-4009-82ca-ee9ec6a88be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:34:58.774167 kubelet[2780]: E0120 01:34:58.774049 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:35:00.700149 containerd[1611]: time="2026-01-20T01:35:00.699901346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:35:00.785347 containerd[1611]: time="2026-01-20T01:35:00.785267906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
01:35:00.787153 containerd[1611]: time="2026-01-20T01:35:00.786986701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:35:00.787415 kubelet[2780]: E0120 01:35:00.787350 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:35:00.787875 kubelet[2780]: E0120 01:35:00.787419 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:35:00.787875 kubelet[2780]: E0120 01:35:00.787544 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 01:35:00.788631 containerd[1611]: time="2026-01-20T01:35:00.788465689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:00.790327 containerd[1611]: time="2026-01-20T01:35:00.790193321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:35:00.849958 containerd[1611]: time="2026-01-20T01:35:00.849851875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:00.852761 containerd[1611]: time="2026-01-20T01:35:00.851655088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:35:00.852761 containerd[1611]: time="2026-01-20T01:35:00.851741558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:00.853546 kubelet[2780]: E0120 01:35:00.852169 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:35:00.853546 kubelet[2780]: E0120 01:35:00.852231 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:35:00.853546 kubelet[2780]: E0120 01:35:00.852368 2780 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:00.854616 kubelet[2780]: E0120 01:35:00.854067 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:35:01.699665 containerd[1611]: time="2026-01-20T01:35:01.699330246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:35:01.776578 containerd[1611]: time="2026-01-20T01:35:01.776499701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:01.780366 containerd[1611]: time="2026-01-20T01:35:01.779985554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:35:01.780366 containerd[1611]: time="2026-01-20T01:35:01.780289130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:01.781069 kubelet[2780]: E0120 01:35:01.780782 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:01.781069 kubelet[2780]: E0120 01:35:01.780846 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:01.781069 kubelet[2780]: E0120 01:35:01.780998 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-ct8ff_calico-apiserver(c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:01.782224 kubelet[2780]: E0120 01:35:01.782191 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:35:02.699588 containerd[1611]: time="2026-01-20T01:35:02.699511081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:35:02.764837 containerd[1611]: time="2026-01-20T01:35:02.764384590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
01:35:02.767765 containerd[1611]: time="2026-01-20T01:35:02.767595193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:02.767765 containerd[1611]: time="2026-01-20T01:35:02.767617319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:35:02.768451 kubelet[2780]: E0120 01:35:02.768372 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:35:02.769900 kubelet[2780]: E0120 01:35:02.768621 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:35:02.771280 kubelet[2780]: E0120 01:35:02.770909 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-947d9dcc-bp5fh_calico-system(e535c75b-4142-4085-8d9d-2841894e5fe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:02.772417 kubelet[2780]: E0120 01:35:02.772384 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:35:02.772865 containerd[1611]: time="2026-01-20T01:35:02.772454394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:35:02.844175 containerd[1611]: time="2026-01-20T01:35:02.844031109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
01:35:02.847166 containerd[1611]: time="2026-01-20T01:35:02.847050850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:35:02.847392 containerd[1611]: time="2026-01-20T01:35:02.847151878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:02.847503 kubelet[2780]: E0120 01:35:02.847434 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:02.847503 kubelet[2780]: E0120 01:35:02.847497 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:02.847862 kubelet[2780]: E0120 01:35:02.847807 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf2km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-dd7bff465-4rkgx_calico-apiserver(d9baf707-371f-47e4-9f67-1785bd6ba68b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:02.850152 kubelet[2780]: E0120 01:35:02.849391 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:35:07.700267 kubelet[2780]: E0120 01:35:07.699254 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:35:08.697918 kubelet[2780]: E0120 01:35:08.697807 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:09.699392 kubelet[2780]: E0120 01:35:09.699020 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:35:12.699426 kubelet[2780]: E0120 01:35:12.699239 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:35:13.697453 kubelet[2780]: E0120 01:35:13.697361 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:13.698133 kubelet[2780]: E0120 01:35:13.697986 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:14.698771 kubelet[2780]: E0120 01:35:14.698571 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:35:14.700036 kubelet[2780]: E0120 01:35:14.699879 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:35:15.703826 kubelet[2780]: E0120 01:35:15.702967 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:35:16.699427 kubelet[2780]: E0120 01:35:16.699344 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:35:17.703306 kubelet[2780]: E0120 01:35:17.703193 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:18.702128 kubelet[2780]: E0120 01:35:18.701968 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:21.698962 kubelet[2780]: E0120 01:35:21.698873 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:35:22.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.144:22-10.0.0.1:54978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:22.552994 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:54978.service - OpenSSH per-connection server daemon (10.0.0.1:54978). Jan 20 01:35:22.559162 kernel: kauditd_printk_skb: 130 callbacks suppressed Jan 20 01:35:22.559256 kernel: audit: type=1130 audit(1768872922.552:733): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.144:22-10.0.0.1:54978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:22.669000 audit[5089]: USER_ACCT pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.670912 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 54978 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:22.674555 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:22.671000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.682271 systemd-logind[1578]: New session 9 of user core. 
Jan 20 01:35:22.690165 kernel: audit: type=1101 audit(1768872922.669:734): pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.690338 kernel: audit: type=1103 audit(1768872922.671:735): pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.690377 kernel: audit: type=1006 audit(1768872922.671:736): pid=5089 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 20 01:35:22.671000 audit[5089]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd11f01fc0 a2=3 a3=0 items=0 ppid=1 pid=5089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:22.706510 kernel: audit: type=1300 audit(1768872922.671:736): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd11f01fc0 a2=3 a3=0 items=0 ppid=1 pid=5089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:22.706663 kernel: audit: type=1327 audit(1768872922.671:736): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:22.671000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:22.712547 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 20 01:35:22.724000 audit[5089]: USER_START pid=5089 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.742162 kernel: audit: type=1105 audit(1768872922.724:737): pid=5089 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.727000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.755242 kernel: audit: type=1103 audit(1768872922.727:738): pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.890386 sshd[5094]: Connection closed by 10.0.0.1 port 54978 Jan 20 01:35:22.892816 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:22.895000 audit[5089]: USER_END pid=5089 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.901201 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. 
Jan 20 01:35:22.904403 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:54978.service: Deactivated successfully. Jan 20 01:35:22.895000 audit[5089]: CRED_DISP pid=5089 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.909265 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:35:22.911865 systemd-logind[1578]: Removed session 9. Jan 20 01:35:22.917761 kernel: audit: type=1106 audit(1768872922.895:739): pid=5089 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.917880 kernel: audit: type=1104 audit(1768872922.895:740): pid=5089 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:22.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.144:22-10.0.0.1:54978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:23.699916 kubelet[2780]: E0120 01:35:23.699716 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:35:24.699991 kubelet[2780]: E0120 01:35:24.699890 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:35:26.699068 kubelet[2780]: E0120 01:35:26.698914 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:35:27.924137 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:35:27.924248 kernel: audit: type=1130 audit(1768872927.913:742): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.144:22-10.0.0.1:54986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:27.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.144:22-10.0.0.1:54986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:27.914643 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:54986.service - OpenSSH per-connection server daemon (10.0.0.1:54986). Jan 20 01:35:27.993000 audit[5108]: USER_ACCT pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:27.995006 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 54986 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:27.997343 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:27.995000 audit[5108]: CRED_ACQ pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.007444 systemd-logind[1578]: New session 10 of user core. 
Jan 20 01:35:28.016418 kernel: audit: type=1101 audit(1768872927.993:743): pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.016522 kernel: audit: type=1103 audit(1768872927.995:744): pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.016565 kernel: audit: type=1006 audit(1768872927.995:745): pid=5108 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 20 01:35:27.995000 audit[5108]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaa201f20 a2=3 a3=0 items=0 ppid=1 pid=5108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:28.034325 kernel: audit: type=1300 audit(1768872927.995:745): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaa201f20 a2=3 a3=0 items=0 ppid=1 pid=5108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:27.995000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:28.036072 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 01:35:28.038662 kernel: audit: type=1327 audit(1768872927.995:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:28.040000 audit[5108]: USER_START pid=5108 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.055228 kernel: audit: type=1105 audit(1768872928.040:746): pid=5108 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.056000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.068187 kernel: audit: type=1103 audit(1768872928.056:747): pid=5112 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.202460 sshd[5112]: Connection closed by 10.0.0.1 port 54986 Jan 20 01:35:28.202924 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:28.204000 audit[5108]: USER_END pid=5108 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 20 01:35:28.209926 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:54986.service: Deactivated successfully. Jan 20 01:35:28.213224 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:35:28.215741 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:35:28.204000 audit[5108]: CRED_DISP pid=5108 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.220427 systemd-logind[1578]: Removed session 10. Jan 20 01:35:28.226070 kernel: audit: type=1106 audit(1768872928.204:748): pid=5108 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.226214 kernel: audit: type=1104 audit(1768872928.204:749): pid=5108 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:28.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.144:22-10.0.0.1:54986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:28.698911 kubelet[2780]: E0120 01:35:28.698491 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:35:28.698911 kubelet[2780]: E0120 01:35:28.698808 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:35:29.699866 kubelet[2780]: E0120 01:35:29.699776 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:35:33.224805 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:35796.service - OpenSSH per-connection server daemon (10.0.0.1:35796). Jan 20 01:35:33.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.144:22-10.0.0.1:35796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:33.226367 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:35:33.226403 kernel: audit: type=1130 audit(1768872933.223:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.144:22-10.0.0.1:35796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:33.301000 audit[5128]: USER_ACCT pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.302426 sshd[5128]: Accepted publickey for core from 10.0.0.1 port 35796 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:33.305505 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:33.303000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.312285 systemd-logind[1578]: New session 11 of user core. 
Jan 20 01:35:33.321629 kernel: audit: type=1101 audit(1768872933.301:752): pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.321879 kernel: audit: type=1103 audit(1768872933.303:753): pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.321922 kernel: audit: type=1006 audit(1768872933.303:754): pid=5128 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 01:35:33.326870 kernel: audit: type=1300 audit(1768872933.303:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc11613290 a2=3 a3=0 items=0 ppid=1 pid=5128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:33.303000 audit[5128]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc11613290 a2=3 a3=0 items=0 ppid=1 pid=5128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:33.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:33.340027 kernel: audit: type=1327 audit(1768872933.303:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:33.346498 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 01:35:33.351000 audit[5128]: USER_START pid=5128 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.351000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.373439 kernel: audit: type=1105 audit(1768872933.351:755): pid=5128 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.373508 kernel: audit: type=1103 audit(1768872933.351:756): pid=5132 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.467405 sshd[5132]: Connection closed by 10.0.0.1 port 35796 Jan 20 01:35:33.467725 sshd-session[5128]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:33.468000 audit[5128]: USER_END pid=5128 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.473498 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:35796.service: Deactivated successfully. 
Jan 20 01:35:33.476380 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:35:33.477483 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:35:33.479628 systemd-logind[1578]: Removed session 11. Jan 20 01:35:33.484202 kernel: audit: type=1106 audit(1768872933.468:757): pid=5128 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.468000 audit[5128]: CRED_DISP pid=5128 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:33.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.144:22-10.0.0.1:35796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:33.501152 kernel: audit: type=1104 audit(1768872933.468:758): pid=5128 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:34.698176 kubelet[2780]: E0120 01:35:34.698125 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:35:35.704496 kubelet[2780]: E0120 01:35:35.704423 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:35:35.705451 kubelet[2780]: E0120 01:35:35.705378 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:35:38.485586 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:35810.service - OpenSSH per-connection server daemon (10.0.0.1:35810). Jan 20 01:35:38.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.144:22-10.0.0.1:35810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:38.499043 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:35:38.499242 kernel: audit: type=1130 audit(1768872938.484:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.144:22-10.0.0.1:35810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:38.557000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.559276 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 35810 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:38.562367 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:38.559000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.571387 systemd-logind[1578]: New session 12 of user core. Jan 20 01:35:38.575752 kernel: audit: type=1101 audit(1768872938.557:761): pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.575808 kernel: audit: type=1103 audit(1768872938.559:762): pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.575846 kernel: audit: type=1006 audit(1768872938.559:763): pid=5149 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 01:35:38.583279 kernel: audit: type=1300 audit(1768872938.559:763): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff53b03590 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:38.559000 audit[5149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff53b03590 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:38.559000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:38.605414 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:35:38.608347 kernel: audit: type=1327 audit(1768872938.559:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:38.610000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.623147 kernel: audit: type=1105 audit(1768872938.610:764): pid=5149 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.613000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.632178 kernel: audit: type=1103 audit(1768872938.613:765): pid=5153 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.697342 kubelet[2780]: E0120 01:35:38.697254 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:38.750479 sshd[5153]: Connection closed by 10.0.0.1 port 35810 Jan 20 01:35:38.750657 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:38.753000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.758212 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:35:38.760490 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:35810.service: Deactivated successfully. Jan 20 01:35:38.767154 kernel: audit: type=1106 audit(1768872938.753:766): pid=5149 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.763830 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:35:38.766647 systemd-logind[1578]: Removed session 12. 
Jan 20 01:35:38.753000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:38.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.144:22-10.0.0.1:35810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:38.775149 kernel: audit: type=1104 audit(1768872938.753:767): pid=5149 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:39.701765 kubelet[2780]: E0120 01:35:39.701655 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:35:40.698545 kubelet[2780]: E0120 01:35:40.698476 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" 
podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:35:41.699223 kubelet[2780]: E0120 01:35:41.699148 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:35:43.697937 kubelet[2780]: E0120 01:35:43.697832 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:43.701049 containerd[1611]: time="2026-01-20T01:35:43.701019899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:35:43.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.144:22-10.0.0.1:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:43.764944 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:59962.service - OpenSSH per-connection server daemon (10.0.0.1:59962). Jan 20 01:35:43.784181 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:35:43.784308 kernel: audit: type=1130 audit(1768872943.763:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.144:22-10.0.0.1:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:43.784349 kubelet[2780]: E0120 01:35:43.782251 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:35:43.784349 kubelet[2780]: E0120 01:35:43.782302 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:35:43.784495 containerd[1611]: time="2026-01-20T01:35:43.780206008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:43.784495 containerd[1611]: time="2026-01-20T01:35:43.781975891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:35:43.784495 containerd[1611]: time="2026-01-20T01:35:43.782015028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:43.808154 kubelet[2780]: E0120 01:35:43.807350 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 01:35:43.814147 containerd[1611]: time="2026-01-20T01:35:43.811878929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:35:43.873957 containerd[1611]: time="2026-01-20T01:35:43.873900175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:43.875878 containerd[1611]: time="2026-01-20T01:35:43.875484383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:35:43.876114 containerd[1611]: time="2026-01-20T01:35:43.875910287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:43.876310 kubelet[2780]: E0120 01:35:43.876274 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:35:43.876635 kubelet[2780]: E0120 01:35:43.876562 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:35:43.877012 kubelet[2780]: E0120 01:35:43.876959 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:43.878567 kubelet[2780]: E0120 01:35:43.878503 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:35:43.878000 audit[5175]: USER_ACCT pid=5175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:43.883420 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:43.886554 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 59962 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:43.880000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:43.908364 kernel: audit: type=1101 audit(1768872943.878:770): pid=5175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:43.908415 kernel: audit: type=1103 audit(1768872943.880:771): pid=5175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:43.908444 kernel: audit: type=1006 audit(1768872943.881:772): pid=5175 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 01:35:43.907985 systemd-logind[1578]: New session 13 of user core. Jan 20 01:35:43.881000 audit[5175]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff384c0b0 a2=3 a3=0 items=0 ppid=1 pid=5175 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:43.924415 kernel: audit: type=1300 audit(1768872943.881:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff384c0b0 a2=3 a3=0 items=0 ppid=1 pid=5175 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:43.881000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:43.931180 kernel: audit: type=1327 audit(1768872943.881:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:43.931531 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 01:35:43.935000 audit[5175]: USER_START pid=5175 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:43.939000 audit[5179]: CRED_ACQ pid=5179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:43.956484 kernel: audit: type=1105 audit(1768872943.935:773): pid=5175 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:43.956579 kernel: audit: type=1103 audit(1768872943.939:774): pid=5179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:44.076280 sshd[5179]: Connection closed by 10.0.0.1 port 59962 Jan 20 01:35:44.078307 sshd-session[5175]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:44.079000 audit[5175]: USER_END pid=5175 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:44.083000 audit[5175]: CRED_DISP pid=5175 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:44.098820 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:59962.service: Deactivated successfully. Jan 20 01:35:44.099449 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:35:44.102786 kernel: audit: type=1106 audit(1768872944.079:775): pid=5175 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:44.102854 kernel: audit: type=1104 audit(1768872944.083:776): pid=5175 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:44.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.144:22-10.0.0.1:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:44.104463 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:35:44.108272 systemd-logind[1578]: Removed session 13. 
Jan 20 01:35:46.699441 containerd[1611]: time="2026-01-20T01:35:46.699345700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:35:46.758921 containerd[1611]: time="2026-01-20T01:35:46.758743387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:46.760161 containerd[1611]: time="2026-01-20T01:35:46.760049826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:35:46.760161 containerd[1611]: time="2026-01-20T01:35:46.760171072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:46.760454 kubelet[2780]: E0120 01:35:46.760388 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:46.761034 kubelet[2780]: E0120 01:35:46.760525 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:46.761267 containerd[1611]: time="2026-01-20T01:35:46.760894772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:35:46.762001 kubelet[2780]: E0120 01:35:46.761916 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79rhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7c8dd7d667-prz7k_calico-apiserver(c55441d4-7803-4009-82ca-ee9ec6a88be8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:46.763353 kubelet[2780]: E0120 01:35:46.763222 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:35:46.828399 containerd[1611]: time="2026-01-20T01:35:46.828300592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:46.829986 containerd[1611]: time="2026-01-20T01:35:46.829885311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:35:46.829986 containerd[1611]: time="2026-01-20T01:35:46.829972273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:46.830371 kubelet[2780]: E0120 01:35:46.830278 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:35:46.830424 kubelet[2780]: E0120 01:35:46.830372 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:35:46.830744 kubelet[2780]: E0120 01:35:46.830523 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50d0da9d3db140cc8836270eb3a85a60,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:46.833632 containerd[1611]: time="2026-01-20T01:35:46.833218151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:35:46.889512 containerd[1611]: time="2026-01-20T01:35:46.889395334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:46.891162 containerd[1611]: time="2026-01-20T01:35:46.891062036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:35:46.891162 containerd[1611]: time="2026-01-20T01:35:46.891153242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:46.892614 kubelet[2780]: E0120 01:35:46.892509 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:35:46.892614 kubelet[2780]: E0120 01:35:46.892576 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:35:46.892776 kubelet[2780]: E0120 01:35:46.892721 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:46.893953 kubelet[2780]: E0120 01:35:46.893917 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:35:49.101205 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:59966.service - OpenSSH per-connection server daemon (10.0.0.1:59966). Jan 20 01:35:49.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.144:22-10.0.0.1:59966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:49.103552 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:35:49.103617 kernel: audit: type=1130 audit(1768872949.100:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.144:22-10.0.0.1:59966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:49.195000 audit[5222]: USER_ACCT pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.196944 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 59966 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:49.199557 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:49.195000 audit[5222]: CRED_ACQ pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.212879 systemd-logind[1578]: New session 14 of user core. Jan 20 01:35:49.213882 kernel: audit: type=1101 audit(1768872949.195:779): pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.213950 kernel: audit: type=1103 audit(1768872949.195:780): pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.213977 kernel: audit: type=1006 audit(1768872949.195:781): pid=5222 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 20 01:35:49.195000 audit[5222]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3f595770 a2=3 a3=0 items=0 ppid=1 pid=5222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:49.226179 kernel: audit: type=1300 audit(1768872949.195:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3f595770 a2=3 a3=0 items=0 ppid=1 pid=5222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:49.226303 kernel: audit: type=1327 audit(1768872949.195:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:49.195000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:49.230444 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:35:49.232000 audit[5222]: USER_START pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.232000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.256901 kernel: audit: type=1105 audit(1768872949.232:782): pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.257052 kernel: audit: type=1103 audit(1768872949.232:783): pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.332825 sshd[5226]: Connection closed by 10.0.0.1 port 59966 Jan 20 01:35:49.333230 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:49.333000 audit[5222]: USER_END pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.337596 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:59966.service: Deactivated successfully. Jan 20 01:35:49.340005 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:35:49.342541 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:35:49.344340 systemd-logind[1578]: Removed session 14. 
Jan 20 01:35:49.333000 audit[5222]: CRED_DISP pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.355598 kernel: audit: type=1106 audit(1768872949.333:784): pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.355662 kernel: audit: type=1104 audit(1768872949.333:785): pid=5222 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:49.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.144:22-10.0.0.1:59966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:50.700053 containerd[1611]: time="2026-01-20T01:35:50.699996053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:35:50.770389 containerd[1611]: time="2026-01-20T01:35:50.770312143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:50.771764 containerd[1611]: time="2026-01-20T01:35:50.771651658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:35:50.771764 containerd[1611]: time="2026-01-20T01:35:50.771721253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:50.772111 kubelet[2780]: E0120 01:35:50.772009 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:35:50.772508 kubelet[2780]: E0120 01:35:50.772181 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:35:50.772508 kubelet[2780]: E0120 01:35:50.772451 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmwd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rs9sl_calico-system(93c423b9-f734-475b-aea9-f003af7097a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:50.772863 containerd[1611]: time="2026-01-20T01:35:50.772803223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:35:50.774003 kubelet[2780]: E0120 01:35:50.773902 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:35:50.835724 containerd[1611]: time="2026-01-20T01:35:50.835630802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:50.837457 containerd[1611]: 
time="2026-01-20T01:35:50.837401497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:35:50.837457 containerd[1611]: time="2026-01-20T01:35:50.837505842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:50.837777 kubelet[2780]: E0120 01:35:50.837714 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:35:50.837846 kubelet[2780]: E0120 01:35:50.837776 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:35:50.837973 kubelet[2780]: E0120 01:35:50.837903 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-947d9dcc-bp5fh_calico-system(e535c75b-4142-4085-8d9d-2841894e5fe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:50.839215 kubelet[2780]: E0120 01:35:50.839178 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:35:52.699446 containerd[1611]: time="2026-01-20T01:35:52.699393547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:35:52.763722 containerd[1611]: time="2026-01-20T01:35:52.763591688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
01:35:52.765314 containerd[1611]: time="2026-01-20T01:35:52.765216606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:35:52.765441 containerd[1611]: time="2026-01-20T01:35:52.765355476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:52.765678 kubelet[2780]: E0120 01:35:52.765583 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:52.765678 kubelet[2780]: E0120 01:35:52.765653 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:52.766221 kubelet[2780]: E0120 01:35:52.765801 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf2km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-dd7bff465-4rkgx_calico-apiserver(d9baf707-371f-47e4-9f67-1785bd6ba68b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:52.767129 kubelet[2780]: E0120 01:35:52.767032 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:35:54.365129 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:35:54.365239 kernel: audit: type=1130 audit(1768872954.361:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.144:22-10.0.0.1:41794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:54.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.144:22-10.0.0.1:41794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:54.362494 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:41794.service - OpenSSH per-connection server daemon (10.0.0.1:41794). 
Jan 20 01:35:54.503258 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 41794 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:54.501000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.518939 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:54.508000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.539654 kernel: audit: type=1101 audit(1768872954.501:788): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.539790 kernel: audit: type=1103 audit(1768872954.508:789): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.539837 kernel: audit: type=1006 audit(1768872954.508:790): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 01:35:54.544567 systemd-logind[1578]: New session 15 of user core. 
Jan 20 01:35:54.508000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff706c4970 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:54.579325 kernel: audit: type=1300 audit(1768872954.508:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff706c4970 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:54.508000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:54.591289 kernel: audit: type=1327 audit(1768872954.508:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:54.597206 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 01:35:54.606000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.640640 kernel: audit: type=1105 audit(1768872954.606:791): pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.640823 kernel: audit: type=1103 audit(1768872954.615:792): pid=5251 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.615000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.799235 sshd[5251]: Connection closed by 10.0.0.1 port 41794 Jan 20 01:35:54.800390 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:54.802000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.802000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.824605 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:41794.service: Deactivated successfully. Jan 20 01:35:54.829014 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:35:54.832866 kernel: audit: type=1106 audit(1768872954.802:793): pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.832967 kernel: audit: type=1104 audit(1768872954.802:794): pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:54.831544 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:35:54.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.144:22-10.0.0.1:41794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:54.835501 systemd-logind[1578]: Removed session 15. 
Jan 20 01:35:55.700878 kubelet[2780]: E0120 01:35:55.699546 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:55.705500 containerd[1611]: time="2026-01-20T01:35:55.705167358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:35:55.771391 containerd[1611]: time="2026-01-20T01:35:55.771065968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:35:55.779343 containerd[1611]: time="2026-01-20T01:35:55.779267325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:35:55.779644 containerd[1611]: time="2026-01-20T01:35:55.779424129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:35:55.780667 kubelet[2780]: E0120 01:35:55.780045 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:55.780667 kubelet[2780]: E0120 01:35:55.780352 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:35:55.792014 kubelet[2780]: E0120 01:35:55.791921 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-ct8ff_calico-apiserver(c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:35:55.793782 kubelet[2780]: E0120 01:35:55.793679 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:35:57.700575 kubelet[2780]: E0120 01:35:57.700503 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:35:59.700621 kubelet[2780]: E0120 01:35:59.700417 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:35:59.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.144:22-10.0.0.1:41806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:35:59.820937 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:41806.service - OpenSSH per-connection server daemon (10.0.0.1:41806). Jan 20 01:35:59.823923 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:35:59.823976 kernel: audit: type=1130 audit(1768872959.820:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.144:22-10.0.0.1:41806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:35:59.899000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:59.900978 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 41806 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:35:59.904056 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:35:59.901000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:59.912450 systemd-logind[1578]: New session 16 of user core. Jan 20 01:35:59.918234 kernel: audit: type=1101 audit(1768872959.899:797): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:59.918291 kernel: audit: type=1103 audit(1768872959.901:798): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:59.924795 kernel: audit: type=1006 audit(1768872959.901:799): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 01:35:59.924861 kernel: audit: type=1300 audit(1768872959.901:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0187f9a0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:59.901000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0187f9a0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:35:59.901000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:59.941499 kernel: audit: type=1327 audit(1768872959.901:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:35:59.950545 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:35:59.955000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:59.958000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:59.982034 kernel: audit: type=1105 audit(1768872959.955:800): pid=5280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:35:59.982203 kernel: audit: type=1103 audit(1768872959.958:801): pid=5284 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:00.095145 sshd[5284]: Connection closed by 10.0.0.1 port 41806 Jan 20 01:36:00.095335 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:00.097000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:00.103880 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:36:00.106070 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:41806.service: Deactivated successfully. Jan 20 01:36:00.110356 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:36:00.111132 kernel: audit: type=1106 audit(1768872960.097:802): pid=5280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:00.097000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:00.118676 systemd-logind[1578]: Removed session 16. 
Jan 20 01:36:00.126144 kernel: audit: type=1104 audit(1768872960.097:803): pid=5280 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:00.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.144:22-10.0.0.1:41806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:00.704351 kubelet[2780]: E0120 01:36:00.704291 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:36:01.700513 kubelet[2780]: E0120 01:36:01.699355 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:36:05.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.144:22-10.0.0.1:41102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:05.155331 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:41102.service - OpenSSH per-connection server daemon (10.0.0.1:41102). Jan 20 01:36:05.201389 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:05.201540 kernel: audit: type=1130 audit(1768872965.154:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.144:22-10.0.0.1:41102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:05.487000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.492795 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 41102 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:05.497000 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:05.522178 kernel: audit: type=1101 audit(1768872965.487:806): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.492000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.528179 systemd-logind[1578]: New session 17 of user core. Jan 20 01:36:05.563874 kernel: audit: type=1103 audit(1768872965.492:807): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.564044 kernel: audit: type=1006 audit(1768872965.492:808): pid=5299 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 01:36:05.564159 kernel: audit: type=1300 audit(1768872965.492:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf1d18c30 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:05.492000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf1d18c30 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:05.600837 kernel: audit: type=1327 audit(1768872965.492:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:05.492000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:05.615951 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 01:36:05.627000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.638000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.666932 kernel: audit: type=1105 audit(1768872965.627:809): pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.667042 kernel: audit: type=1103 audit(1768872965.638:810): pid=5303 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:05.711833 kubelet[2780]: E0120 01:36:05.708233 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:36:06.323056 sshd[5303]: Connection closed 
by 10.0.0.1 port 41102 Jan 20 01:36:06.322060 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:06.330000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:06.330000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:06.372882 kernel: audit: type=1106 audit(1768872966.330:811): pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:06.373306 kernel: audit: type=1104 audit(1768872966.330:812): pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:06.419671 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:41102.service: Deactivated successfully. Jan 20 01:36:06.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.144:22-10.0.0.1:41102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:06.429980 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:36:06.440848 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit. 
Jan 20 01:36:06.457882 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:41118.service - OpenSSH per-connection server daemon (10.0.0.1:41118). Jan 20 01:36:06.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.144:22-10.0.0.1:41118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:06.460171 systemd-logind[1578]: Removed session 17. Jan 20 01:36:06.654000 audit[5318]: USER_ACCT pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:06.661171 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 41118 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:06.663000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:06.663000 audit[5318]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4d9bbd40 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:06.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:06.669622 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:06.706051 kubelet[2780]: E0120 01:36:06.703489 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:36:06.711202 systemd-logind[1578]: New session 18 of user core. Jan 20 01:36:06.715506 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 01:36:06.733000 audit[5318]: USER_START pid=5318 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:06.742000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.215988 sshd[5322]: Connection closed by 10.0.0.1 port 41118 Jan 20 01:36:07.218315 sshd-session[5318]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:07.223000 audit[5318]: USER_END pid=5318 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.223000 audit[5318]: CRED_DISP pid=5318 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.260617 
systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:41118.service: Deactivated successfully. Jan 20 01:36:07.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.144:22-10.0.0.1:41118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:07.271192 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:36:07.275316 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:36:07.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.144:22-10.0.0.1:41122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:07.292605 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:41122.service - OpenSSH per-connection server daemon (10.0.0.1:41122). Jan 20 01:36:07.305532 systemd-logind[1578]: Removed session 18. Jan 20 01:36:07.513000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.518360 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 41122 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:07.521000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.521000 audit[5334]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb9393f00 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:07.521000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:07.525935 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:07.550529 systemd-logind[1578]: New session 19 of user core. Jan 20 01:36:07.563429 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:36:07.578000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.583000 audit[5338]: CRED_ACQ pid=5338 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.933692 sshd[5338]: Connection closed by 10.0.0.1 port 41122 Jan 20 01:36:07.936232 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:07.944000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.944000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:07.959405 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:41122.service: Deactivated 
successfully. Jan 20 01:36:07.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.144:22-10.0.0.1:41122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:07.979959 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:36:07.995066 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:36:08.002172 systemd-logind[1578]: Removed session 19. Jan 20 01:36:08.703523 kubelet[2780]: E0120 01:36:08.703433 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:36:08.723325 kubelet[2780]: E0120 01:36:08.709013 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:36:11.703162 kubelet[2780]: E0120 01:36:11.701694 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:36:12.707172 kubelet[2780]: E0120 01:36:12.705584 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:36:13.003977 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 01:36:13.004184 kernel: audit: type=1130 audit(1768872972.995:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.144:22-10.0.0.1:33284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:12.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.144:22-10.0.0.1:33284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:12.999437 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:33284.service - OpenSSH per-connection server daemon (10.0.0.1:33284). Jan 20 01:36:13.172000 audit[5352]: USER_ACCT pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.180389 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 33284 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:13.182956 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:13.199310 kernel: audit: type=1101 audit(1768872973.172:833): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.177000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.210154 systemd-logind[1578]: New session 20 of user core. 
Jan 20 01:36:13.219851 kernel: audit: type=1103 audit(1768872973.177:834): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.220012 kernel: audit: type=1006 audit(1768872973.177:835): pid=5352 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 20 01:36:13.177000 audit[5352]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd6ee9920 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:13.241319 kernel: audit: type=1300 audit(1768872973.177:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd6ee9920 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:13.245508 kernel: audit: type=1327 audit(1768872973.177:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:13.177000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:13.246516 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 01:36:13.253000 audit[5352]: USER_START pid=5352 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.260000 audit[5356]: CRED_ACQ pid=5356 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.294884 kernel: audit: type=1105 audit(1768872973.253:836): pid=5352 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.295012 kernel: audit: type=1103 audit(1768872973.260:837): pid=5356 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.498810 sshd[5356]: Connection closed by 10.0.0.1 port 33284 Jan 20 01:36:13.499292 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:13.501000 audit[5352]: USER_END pid=5352 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.512071 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit. 
Jan 20 01:36:13.514622 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:33284.service: Deactivated successfully. Jan 20 01:36:13.522925 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:36:13.531832 kernel: audit: type=1106 audit(1768872973.501:838): pid=5352 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.531933 kernel: audit: type=1104 audit(1768872973.501:839): pid=5352 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.501000 audit[5352]: CRED_DISP pid=5352 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:13.536122 systemd-logind[1578]: Removed session 20. Jan 20 01:36:13.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.144:22-10.0.0.1:33284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:16.699602 kubelet[2780]: E0120 01:36:16.699044 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:36:17.701388 kubelet[2780]: E0120 01:36:17.700507 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:36:18.538765 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:33288.service - OpenSSH per-connection server daemon (10.0.0.1:33288). Jan 20 01:36:18.543847 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:18.543953 kernel: audit: type=1130 audit(1768872978.537:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.144:22-10.0.0.1:33288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:18.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.144:22-10.0.0.1:33288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:18.700151 kubelet[2780]: E0120 01:36:18.698511 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:36:18.786000 audit[5396]: USER_ACCT pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:18.794361 sshd[5396]: Accepted publickey for core from 10.0.0.1 port 33288 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:18.823050 kernel: audit: type=1101 audit(1768872978.786:842): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:18.823393 kernel: audit: type=1103 audit(1768872978.802:843): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:18.802000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
01:36:18.808043 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:18.823338 systemd-logind[1578]: New session 21 of user core. Jan 20 01:36:18.802000 audit[5396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff659e3a90 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:18.854017 kernel: audit: type=1006 audit(1768872978.802:844): pid=5396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 01:36:18.854440 kernel: audit: type=1300 audit(1768872978.802:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff659e3a90 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:18.854573 kernel: audit: type=1327 audit(1768872978.802:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:18.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:18.855910 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 01:36:18.868000 audit[5396]: USER_START pid=5396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:18.905652 kernel: audit: type=1105 audit(1768872978.868:845): pid=5396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:18.905855 kernel: audit: type=1103 audit(1768872978.878:846): pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:18.878000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:19.204982 sshd[5400]: Connection closed by 10.0.0.1 port 33288 Jan 20 01:36:19.206000 audit[5396]: USER_END pid=5396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:19.222515 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:33288.service: Deactivated successfully. 
Jan 20 01:36:19.205446 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:19.234950 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:36:19.206000 audit[5396]: CRED_DISP pid=5396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:19.240226 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:36:19.247976 systemd-logind[1578]: Removed session 21. Jan 20 01:36:19.254233 kernel: audit: type=1106 audit(1768872979.206:847): pid=5396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:19.254385 kernel: audit: type=1104 audit(1768872979.206:848): pid=5396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:19.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.144:22-10.0.0.1:33288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:21.702833 kubelet[2780]: E0120 01:36:21.701980 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:36:22.701900 kubelet[2780]: E0120 01:36:22.699214 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:36:23.734075 kubelet[2780]: E0120 01:36:23.733296 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:36:24.237581 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:43814.service - OpenSSH per-connection server daemon (10.0.0.1:43814). Jan 20 01:36:24.267614 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:24.267672 kernel: audit: type=1130 audit(1768872984.236:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.144:22-10.0.0.1:43814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:24.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.144:22-10.0.0.1:43814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:24.406000 audit[5414]: USER_ACCT pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.411348 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 43814 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:24.413351 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:24.423927 kernel: audit: type=1101 audit(1768872984.406:851): pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.424127 kernel: audit: type=1103 audit(1768872984.410:852): pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.410000 audit[5414]: CRED_ACQ pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.436226 systemd-logind[1578]: New session 22 of user core. Jan 20 01:36:24.447856 kernel: audit: type=1006 audit(1768872984.410:853): pid=5414 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 01:36:24.410000 audit[5414]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8e7c8370 a2=3 a3=0 items=0 ppid=1 pid=5414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:24.471843 kernel: audit: type=1300 audit(1768872984.410:853): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8e7c8370 a2=3 a3=0 items=0 ppid=1 pid=5414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:24.471997 kernel: audit: type=1327 audit(1768872984.410:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:24.410000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:24.483620 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 01:36:24.495000 audit[5414]: USER_START pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.500000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.535921 kernel: audit: type=1105 audit(1768872984.495:854): pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.536060 kernel: audit: type=1103 audit(1768872984.500:855): pid=5418 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.700974 kubelet[2780]: E0120 01:36:24.699614 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:24.755162 sshd[5418]: Connection closed by 10.0.0.1 port 43814 Jan 20 01:36:24.756645 sshd-session[5414]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:24.798428 kernel: audit: type=1106 audit(1768872984.763:856): pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.763000 audit[5414]: USER_END pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.779313 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:43814.service: Deactivated successfully. Jan 20 01:36:24.798403 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:36:24.763000 audit[5414]: CRED_DISP pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:24.805435 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:36:24.809432 systemd-logind[1578]: Removed session 22. Jan 20 01:36:24.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.144:22-10.0.0.1:43814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:24.813238 kernel: audit: type=1104 audit(1768872984.763:857): pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:25.704068 kubelet[2780]: E0120 01:36:25.703465 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:27.721927 kubelet[2780]: E0120 01:36:27.721837 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:36:28.699056 kubelet[2780]: E0120 01:36:28.698955 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:36:29.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.144:22-10.0.0.1:43820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:29.801459 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:43820.service - OpenSSH per-connection server daemon (10.0.0.1:43820). Jan 20 01:36:29.811169 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:29.811285 kernel: audit: type=1130 audit(1768872989.799:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.144:22-10.0.0.1:43820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:29.980000 audit[5432]: USER_ACCT pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.005813 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 43820 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:30.006451 kernel: audit: type=1101 audit(1768872989.980:860): pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.009787 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:30.006000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.024157 kernel: audit: type=1103 audit(1768872990.006:861): pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.032179 kernel: audit: type=1006 audit(1768872990.006:862): pid=5432 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 01:36:30.032289 kernel: audit: type=1300 audit(1768872990.006:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3c822890 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:30.006000 audit[5432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3c822890 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:30.034428 systemd-logind[1578]: New session 23 of user core. Jan 20 01:36:30.006000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:30.046204 kernel: audit: type=1327 audit(1768872990.006:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:30.054448 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 01:36:30.066000 audit[5432]: USER_START pid=5432 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.078000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.104391 kernel: audit: type=1105 audit(1768872990.066:863): pid=5432 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.105693 kernel: audit: type=1103 audit(1768872990.078:864): pid=5436 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.293152 sshd[5436]: Connection closed by 10.0.0.1 port 43820 Jan 20 01:36:30.294135 sshd-session[5432]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:30.302000 audit[5432]: USER_END pid=5432 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.309920 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:43820.service: Deactivated successfully. 
Jan 20 01:36:30.317505 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:36:30.319566 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:36:30.323162 systemd-logind[1578]: Removed session 23. Jan 20 01:36:30.331178 kernel: audit: type=1106 audit(1768872990.302:865): pid=5432 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.333288 kernel: audit: type=1104 audit(1768872990.302:866): pid=5432 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.302000 audit[5432]: CRED_DISP pid=5432 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:30.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.144:22-10.0.0.1:43820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:30.698805 kubelet[2780]: E0120 01:36:30.698632 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:36:31.726498 kubelet[2780]: E0120 01:36:31.723856 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:36:32.701877 kubelet[2780]: E0120 01:36:32.700524 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:32.703956 kubelet[2780]: E0120 01:36:32.702431 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:33.704817 kubelet[2780]: E0120 01:36:33.704709 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:33.709674 kubelet[2780]: E0120 01:36:33.709541 2780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:36:34.702052 kubelet[2780]: E0120 01:36:34.701949 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:36:35.324155 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:36272.service - OpenSSH per-connection server daemon (10.0.0.1:36272). Jan 20 01:36:35.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.144:22-10.0.0.1:36272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:35.329553 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:35.329609 kernel: audit: type=1130 audit(1768872995.325:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.144:22-10.0.0.1:36272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:35.466000 audit[5453]: USER_ACCT pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.470174 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 36272 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:35.473807 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:35.470000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.488183 systemd-logind[1578]: New session 24 of user core. 
Jan 20 01:36:35.497696 kernel: audit: type=1101 audit(1768872995.466:869): pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.497875 kernel: audit: type=1103 audit(1768872995.470:870): pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.507555 kernel: audit: type=1006 audit(1768872995.470:871): pid=5453 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 01:36:35.470000 audit[5453]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe92858890 a2=3 a3=0 items=0 ppid=1 pid=5453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:35.519308 kernel: audit: type=1300 audit(1768872995.470:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe92858890 a2=3 a3=0 items=0 ppid=1 pid=5453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:35.470000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:35.524060 kernel: audit: type=1327 audit(1768872995.470:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:35.527282 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 01:36:35.537000 audit[5453]: USER_START pid=5453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.544000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.567389 kernel: audit: type=1105 audit(1768872995.537:872): pid=5453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.567525 kernel: audit: type=1103 audit(1768872995.544:873): pid=5457 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.770072 sshd[5457]: Connection closed by 10.0.0.1 port 36272 Jan 20 01:36:35.770542 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:35.773000 audit[5453]: USER_END pid=5453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.781604 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:36272.service: Deactivated successfully. 
Jan 20 01:36:35.790596 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:36:35.773000 audit[5453]: CRED_DISP pid=5453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.799129 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:36:35.802842 kernel: audit: type=1106 audit(1768872995.773:874): pid=5453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.802951 kernel: audit: type=1104 audit(1768872995.773:875): pid=5453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:35.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.144:22-10.0.0.1:36272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:35.802554 systemd-logind[1578]: Removed session 24. 
Jan 20 01:36:38.701774 kubelet[2780]: E0120 01:36:38.701672 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:36:40.801808 systemd[1]: Started sshd@23-10.0.0.144:22-10.0.0.1:36276.service - OpenSSH per-connection server daemon (10.0.0.1:36276). Jan 20 01:36:40.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.144:22-10.0.0.1:36276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:40.811652 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:40.811934 kernel: audit: type=1130 audit(1768873000.800:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.144:22-10.0.0.1:36276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:36:40.994000 audit[5472]: USER_ACCT pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.002447 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 36276 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:41.002389 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:40.997000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.021500 systemd-logind[1578]: New session 25 of user core. Jan 20 01:36:41.034290 kernel: audit: type=1101 audit(1768873000.994:878): pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.034442 kernel: audit: type=1103 audit(1768873000.997:879): pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.034496 kernel: audit: type=1006 audit(1768873000.997:880): pid=5472 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 01:36:40.997000 audit[5472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd42bfac90 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:41.058309 kernel: audit: type=1300 audit(1768873000.997:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd42bfac90 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:41.060902 kernel: audit: type=1327 audit(1768873000.997:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:40.997000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:41.067686 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:36:41.073000 audit[5472]: USER_START pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.093161 kernel: audit: type=1105 audit(1768873001.073:881): pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.077000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.110197 kernel: audit: type=1103 audit(1768873001.077:882): pid=5476 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.330245 sshd[5476]: Connection closed by 10.0.0.1 port 36276 Jan 20 01:36:41.333061 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:41.333000 audit[5472]: USER_END pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.340870 systemd[1]: sshd@23-10.0.0.144:22-10.0.0.1:36276.service: Deactivated successfully. Jan 20 01:36:41.346380 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:36:41.333000 audit[5472]: CRED_DISP pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.354795 systemd-logind[1578]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:36:41.358632 systemd-logind[1578]: Removed session 25. 
Jan 20 01:36:41.365800 kernel: audit: type=1106 audit(1768873001.333:883): pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.365922 kernel: audit: type=1104 audit(1768873001.333:884): pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:41.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.144:22-10.0.0.1:36276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:42.702891 kubelet[2780]: E0120 01:36:42.700999 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:36:42.702891 kubelet[2780]: E0120 01:36:42.702072 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:36:44.698933 kubelet[2780]: E0120 01:36:44.698730 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:36:45.701248 kubelet[2780]: E0120 01:36:45.701143 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:36:46.359034 systemd[1]: Started sshd@24-10.0.0.144:22-10.0.0.1:40548.service - OpenSSH per-connection server daemon (10.0.0.1:40548). 
Jan 20 01:36:46.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.144:22-10.0.0.1:40548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:46.369170 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:46.369318 kernel: audit: type=1130 audit(1768873006.358:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.144:22-10.0.0.1:40548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:46.479612 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 40548 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:46.496265 kernel: audit: type=1101 audit(1768873006.478:887): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.478000 audit[5492]: USER_ACCT pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.490589 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:46.481000 audit[5492]: CRED_ACQ pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.509049 kernel: audit: type=1103 audit(1768873006.481:888): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.511532 systemd-logind[1578]: New session 26 of user core. Jan 20 01:36:46.481000 audit[5492]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd77c30b40 a2=3 a3=0 items=0 ppid=1 pid=5492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:46.536252 kernel: audit: type=1006 audit(1768873006.481:889): pid=5492 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 01:36:46.536468 kernel: audit: type=1300 audit(1768873006.481:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd77c30b40 a2=3 a3=0 items=0 ppid=1 pid=5492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:46.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:46.543004 kernel: audit: type=1327 audit(1768873006.481:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:46.548333 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 01:36:46.557000 audit[5492]: USER_START pid=5492 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.583855 kernel: audit: type=1105 audit(1768873006.557:890): pid=5492 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.584002 kernel: audit: type=1103 audit(1768873006.558:891): pid=5496 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.558000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.700931 kubelet[2780]: E0120 01:36:46.700833 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:46.705601 kubelet[2780]: E0120 01:36:46.704916 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:36:46.706700 kubelet[2780]: E0120 01:36:46.705846 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:36:46.746169 sshd[5496]: Connection closed by 10.0.0.1 port 40548 Jan 20 01:36:46.750041 sshd-session[5492]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:46.760000 audit[5492]: USER_END pid=5492 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.795774 kernel: audit: type=1106 audit(1768873006.760:892): pid=5492 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.760000 audit[5492]: CRED_DISP pid=5492 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.802648 systemd[1]: sshd@24-10.0.0.144:22-10.0.0.1:40548.service: Deactivated successfully. Jan 20 01:36:46.806059 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:36:46.818384 systemd-logind[1578]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:36:46.822158 kernel: audit: type=1104 audit(1768873006.760:893): pid=5492 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:46.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.144:22-10.0.0.1:40548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:46.822950 systemd-logind[1578]: Removed session 26. Jan 20 01:36:49.701389 kubelet[2780]: E0120 01:36:49.699735 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:36:51.765422 systemd[1]: Started sshd@25-10.0.0.144:22-10.0.0.1:40564.service - OpenSSH per-connection server daemon (10.0.0.1:40564). 
Jan 20 01:36:51.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.144:22-10.0.0.1:40564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:51.770439 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:36:51.770609 kernel: audit: type=1130 audit(1768873011.764:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.144:22-10.0.0.1:40564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:51.873356 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 40564 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:51.871000 audit[5536]: USER_ACCT pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:51.878940 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:51.876000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:51.898980 systemd-logind[1578]: New session 27 of user core. 
Jan 20 01:36:51.921189 kernel: audit: type=1101 audit(1768873011.871:896): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:51.921358 kernel: audit: type=1103 audit(1768873011.876:897): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:51.921415 kernel: audit: type=1006 audit(1768873011.876:898): pid=5536 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 01:36:51.876000 audit[5536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1403af40 a2=3 a3=0 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:51.955989 kernel: audit: type=1300 audit(1768873011.876:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1403af40 a2=3 a3=0 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:51.956257 kernel: audit: type=1327 audit(1768873011.876:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:51.876000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:51.956383 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 20 01:36:51.969000 audit[5536]: USER_START pid=5536 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:51.999150 kernel: audit: type=1105 audit(1768873011.969:899): pid=5536 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:51.999276 kernel: audit: type=1103 audit(1768873011.990:900): pid=5540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:51.990000 audit[5540]: CRED_ACQ pid=5540 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.246143 sshd[5540]: Connection closed by 10.0.0.1 port 40564 Jan 20 01:36:52.247461 sshd-session[5536]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:52.249000 audit[5536]: USER_END pid=5536 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.249000 audit[5536]: CRED_DISP pid=5536 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.275193 kernel: audit: type=1106 audit(1768873012.249:901): pid=5536 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.275354 kernel: audit: type=1104 audit(1768873012.249:902): pid=5536 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.288961 systemd[1]: sshd@25-10.0.0.144:22-10.0.0.1:40564.service: Deactivated successfully. Jan 20 01:36:52.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.144:22-10.0.0.1:40564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:52.293228 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 01:36:52.299621 systemd-logind[1578]: Session 27 logged out. Waiting for processes to exit. Jan 20 01:36:52.302964 systemd[1]: Started sshd@26-10.0.0.144:22-10.0.0.1:40576.service - OpenSSH per-connection server daemon (10.0.0.1:40576). Jan 20 01:36:52.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.144:22-10.0.0.1:40576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:52.304660 systemd-logind[1578]: Removed session 27. 
Jan 20 01:36:52.416000 audit[5553]: USER_ACCT pid=5553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.419875 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 40576 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:52.419000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.419000 audit[5553]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd10741f0 a2=3 a3=0 items=0 ppid=1 pid=5553 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:52.419000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:52.422540 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:52.450393 systemd-logind[1578]: New session 28 of user core. Jan 20 01:36:52.458283 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 20 01:36:52.484000 audit[5553]: USER_START pid=5553 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:52.488000 audit[5557]: CRED_ACQ pid=5557 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:53.235153 sshd[5557]: Connection closed by 10.0.0.1 port 40576 Jan 20 01:36:53.238685 sshd-session[5553]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:53.241000 audit[5553]: USER_END pid=5553 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:53.242000 audit[5553]: CRED_DISP pid=5553 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:53.254428 systemd[1]: sshd@26-10.0.0.144:22-10.0.0.1:40576.service: Deactivated successfully. Jan 20 01:36:53.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.144:22-10.0.0.1:40576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:53.257417 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 01:36:53.269714 systemd-logind[1578]: Session 28 logged out. Waiting for processes to exit. 
Jan 20 01:36:53.275560 systemd[1]: Started sshd@27-10.0.0.144:22-10.0.0.1:54448.service - OpenSSH per-connection server daemon (10.0.0.1:54448). Jan 20 01:36:53.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.144:22-10.0.0.1:54448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:53.285174 systemd-logind[1578]: Removed session 28. Jan 20 01:36:53.491000 audit[5569]: USER_ACCT pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:53.497203 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 54448 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:53.500000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:53.503000 audit[5569]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2f2a72f0 a2=3 a3=0 items=0 ppid=1 pid=5569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:53.503000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:53.506470 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:53.528976 systemd-logind[1578]: New session 29 of user core. Jan 20 01:36:53.543324 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 01:36:53.552000 audit[5569]: USER_START pid=5569 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:53.558000 audit[5573]: CRED_ACQ pid=5573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:54.703000 audit[5586]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:36:54.703000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff21f8f660 a2=0 a3=7fff21f8f64c items=0 ppid=2936 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:54.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:36:54.718000 audit[5586]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:36:54.718000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff21f8f660 a2=0 a3=0 items=0 ppid=2936 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:54.718000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:36:54.729061 sshd[5573]: Connection closed by 10.0.0.1 port 54448 Jan 20 01:36:54.732618 sshd-session[5569]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:54.739000 audit[5569]: USER_END pid=5569 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:54.739000 audit[5569]: CRED_DISP pid=5569 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:54.744711 systemd[1]: Started sshd@28-10.0.0.144:22-10.0.0.1:54464.service - OpenSSH per-connection server daemon (10.0.0.1:54464). Jan 20 01:36:54.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.144:22-10.0.0.1:54464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:54.748224 systemd[1]: sshd@27-10.0.0.144:22-10.0.0.1:54448.service: Deactivated successfully. Jan 20 01:36:54.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.144:22-10.0.0.1:54448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:54.753952 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 01:36:54.759386 systemd-logind[1578]: Session 29 logged out. Waiting for processes to exit. Jan 20 01:36:54.764278 systemd-logind[1578]: Removed session 29. 
Jan 20 01:36:54.774000 audit[5593]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:36:54.774000 audit[5593]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffebb3b71d0 a2=0 a3=7ffebb3b71bc items=0 ppid=2936 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:54.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:36:54.798000 audit[5593]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:36:54.798000 audit[5593]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffebb3b71d0 a2=0 a3=0 items=0 ppid=2936 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:54.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:36:54.901000 audit[5589]: USER_ACCT pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:54.902528 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 54464 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:54.907000 audit[5589]: CRED_ACQ pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:54.907000 audit[5589]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdab249920 a2=3 a3=0 items=0 ppid=1 pid=5589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:54.907000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:54.910971 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:54.929790 systemd-logind[1578]: New session 30 of user core. Jan 20 01:36:54.945381 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 01:36:54.953000 audit[5589]: USER_START pid=5589 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:54.957000 audit[5597]: CRED_ACQ pid=5597 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:55.499843 sshd[5597]: Connection closed by 10.0.0.1 port 54464 Jan 20 01:36:55.500298 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:55.503000 audit[5589]: USER_END pid=5589 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
01:36:55.503000 audit[5589]: CRED_DISP pid=5589 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:55.528050 systemd[1]: sshd@28-10.0.0.144:22-10.0.0.1:54464.service: Deactivated successfully. Jan 20 01:36:55.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.144:22-10.0.0.1:54464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:55.534993 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 01:36:55.539030 systemd-logind[1578]: Session 30 logged out. Waiting for processes to exit. Jan 20 01:36:55.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.144:22-10.0.0.1:54466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:55.550701 systemd[1]: Started sshd@29-10.0.0.144:22-10.0.0.1:54466.service - OpenSSH per-connection server daemon (10.0.0.1:54466). Jan 20 01:36:55.552364 systemd-logind[1578]: Removed session 30. 
Jan 20 01:36:55.689000 audit[5609]: USER_ACCT pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:55.692000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:55.692000 audit[5609]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc5679ac0 a2=3 a3=0 items=0 ppid=1 pid=5609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:36:55.692000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:36:55.694583 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 54466 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:36:55.697326 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:55.702589 kubelet[2780]: E0120 01:36:55.702051 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:36:55.714321 systemd-logind[1578]: New session 31 of user core. 
Jan 20 01:36:55.724564 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 01:36:55.730000 audit[5609]: USER_START pid=5609 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:55.736000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:55.985180 sshd[5614]: Connection closed by 10.0.0.1 port 54466 Jan 20 01:36:55.985792 sshd-session[5609]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:55.993000 audit[5609]: USER_END pid=5609 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:55.994000 audit[5609]: CRED_DISP pid=5609 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:36:56.008227 systemd[1]: sshd@29-10.0.0.144:22-10.0.0.1:54466.service: Deactivated successfully. Jan 20 01:36:56.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.144:22-10.0.0.1:54466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:36:56.014872 systemd[1]: session-31.scope: Deactivated successfully. 
Jan 20 01:36:56.021352 systemd-logind[1578]: Session 31 logged out. Waiting for processes to exit. Jan 20 01:36:56.026170 systemd-logind[1578]: Removed session 31. Jan 20 01:36:56.698039 kubelet[2780]: E0120 01:36:56.697949 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:56.706046 kubelet[2780]: E0120 01:36:56.705856 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:36:56.708150 kubelet[2780]: E0120 01:36:56.707974 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:36:57.705929 kubelet[2780]: E0120 01:36:57.702549 2780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:37:01.011580 systemd[1]: Started sshd@30-10.0.0.144:22-10.0.0.1:54468.service - OpenSSH per-connection server daemon (10.0.0.1:54468). Jan 20 01:37:01.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.144:22-10.0.0.1:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:01.031667 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 20 01:37:01.032827 kernel: audit: type=1130 audit(1768873021.012:944): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.144:22-10.0.0.1:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:01.159000 audit[5633]: USER_ACCT pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.169432 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:01.179795 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 54468 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:37:01.210282 kernel: audit: type=1101 audit(1768873021.159:945): pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.210468 kernel: audit: type=1103 audit(1768873021.165:946): pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.210503 kernel: audit: type=1006 audit(1768873021.165:947): pid=5633 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 20 01:37:01.165000 audit[5633]: CRED_ACQ pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.165000 audit[5633]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5fa33fe0 a2=3 a3=0 items=0 ppid=1 pid=5633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:01.229213 systemd-logind[1578]: New session 32 of user core. Jan 20 01:37:01.240394 kernel: audit: type=1300 audit(1768873021.165:947): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5fa33fe0 a2=3 a3=0 items=0 ppid=1 pid=5633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:01.247851 kernel: audit: type=1327 audit(1768873021.165:947): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:01.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:01.253509 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 01:37:01.263000 audit[5633]: USER_START pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.300211 kernel: audit: type=1105 audit(1768873021.263:948): pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.270000 audit[5638]: CRED_ACQ pid=5638 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.319212 kernel: audit: type=1103 audit(1768873021.270:949): pid=5638 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.531919 sshd[5638]: Connection closed by 10.0.0.1 port 54468 Jan 20 01:37:01.529000 audit[5633]: USER_END pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.528384 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:01.578165 kernel: audit: type=1106 audit(1768873021.529:950): pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.578307 kernel: audit: type=1104 audit(1768873021.531:951): pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.531000 audit[5633]: CRED_DISP pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:01.565703 systemd-logind[1578]: Session 32 logged out. Waiting for processes to exit. Jan 20 01:37:01.568706 systemd[1]: sshd@30-10.0.0.144:22-10.0.0.1:54468.service: Deactivated successfully. Jan 20 01:37:01.577070 systemd[1]: session-32.scope: Deactivated successfully. 
Jan 20 01:37:01.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.144:22-10.0.0.1:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:01.591327 systemd-logind[1578]: Removed session 32. Jan 20 01:37:01.721159 kubelet[2780]: E0120 01:37:01.718374 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:37:01.721159 kubelet[2780]: E0120 01:37:01.720488 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:37:04.703640 kubelet[2780]: E0120 01:37:04.701624 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:37:06.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.144:22-10.0.0.1:46196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:06.562268 systemd[1]: Started sshd@31-10.0.0.144:22-10.0.0.1:46196.service - OpenSSH per-connection server daemon (10.0.0.1:46196). Jan 20 01:37:06.572206 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:37:06.572333 kernel: audit: type=1130 audit(1768873026.561:953): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.144:22-10.0.0.1:46196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:06.714000 audit[5653]: USER_ACCT pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:06.716250 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 46196 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:37:06.724727 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:06.734666 kernel: audit: type=1101 audit(1768873026.714:954): pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:06.734863 kernel: audit: type=1103 audit(1768873026.721:955): pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:06.721000 audit[5653]: CRED_ACQ pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:06.750287 systemd-logind[1578]: New session 33 of user core. 
Jan 20 01:37:06.752548 kernel: audit: type=1006 audit(1768873026.721:956): pid=5653 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 20 01:37:06.752606 kernel: audit: type=1300 audit(1768873026.721:956): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff223f1e10 a2=3 a3=0 items=0 ppid=1 pid=5653 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:06.721000 audit[5653]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff223f1e10 a2=3 a3=0 items=0 ppid=1 pid=5653 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:06.721000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:06.771221 kernel: audit: type=1327 audit(1768873026.721:956): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:06.772833 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 20 01:37:06.786000 audit[5653]: USER_START pid=5653 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:06.822864 kernel: audit: type=1105 audit(1768873026.786:957): pid=5653 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:06.803000 audit[5657]: CRED_ACQ pid=5657 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:06.842227 kernel: audit: type=1103 audit(1768873026.803:958): pid=5657 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:07.118685 sshd[5657]: Connection closed by 10.0.0.1 port 46196 Jan 20 01:37:07.122353 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:07.130000 audit[5653]: USER_END pid=5653 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:07.141954 systemd[1]: sshd@31-10.0.0.144:22-10.0.0.1:46196.service: Deactivated successfully. 
Jan 20 01:37:07.146168 systemd-logind[1578]: Session 33 logged out. Waiting for processes to exit. Jan 20 01:37:07.147281 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 01:37:07.153173 kernel: audit: type=1106 audit(1768873027.130:959): pid=5653 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:07.130000 audit[5653]: CRED_DISP pid=5653 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:07.158797 systemd-logind[1578]: Removed session 33. Jan 20 01:37:07.170277 kernel: audit: type=1104 audit(1768873027.130:960): pid=5653 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:07.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.144:22-10.0.0.1:46196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:08.702687 containerd[1611]: time="2026-01-20T01:37:08.702304004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:37:08.814196 containerd[1611]: time="2026-01-20T01:37:08.813269164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:08.815503 containerd[1611]: time="2026-01-20T01:37:08.815345610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:37:08.815503 containerd[1611]: time="2026-01-20T01:37:08.815475663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:08.816932 kubelet[2780]: E0120 01:37:08.815735 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:37:08.816932 kubelet[2780]: E0120 01:37:08.815882 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:37:08.827838 kubelet[2780]: E0120 01:37:08.822995 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50d0da9d3db140cc8836270eb3a85a60,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:08.829123 containerd[1611]: time="2026-01-20T01:37:08.828987557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:37:08.924213 containerd[1611]: 
time="2026-01-20T01:37:08.924065635Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:08.932930 containerd[1611]: time="2026-01-20T01:37:08.930267483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:37:08.932930 containerd[1611]: time="2026-01-20T01:37:08.930398707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:08.933287 kubelet[2780]: E0120 01:37:08.930745 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:37:08.933287 kubelet[2780]: E0120 01:37:08.930841 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:37:08.933287 kubelet[2780]: E0120 01:37:08.930979 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mchlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b9db9c79-llb9v_calico-system(49316a51-69bf-4cd8-a713-083d988333bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:08.937355 kubelet[2780]: E0120 01:37:08.937217 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:37:09.721512 kubelet[2780]: E0120 01:37:09.721350 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:37:10.703049 kubelet[2780]: E0120 01:37:10.701553 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:37:11.710180 kubelet[2780]: E0120 01:37:11.709509 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:37:12.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.144:22-10.0.0.1:46210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:12.142295 systemd[1]: Started sshd@32-10.0.0.144:22-10.0.0.1:46210.service - OpenSSH per-connection server daemon (10.0.0.1:46210). Jan 20 01:37:12.144857 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:37:12.144983 kernel: audit: type=1130 audit(1768873032.141:962): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.144:22-10.0.0.1:46210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:12.287000 audit[5671]: USER_ACCT pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.297960 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 46210 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:37:12.301306 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:12.294000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.319155 kernel: audit: type=1101 audit(1768873032.287:963): pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.319297 kernel: audit: type=1103 audit(1768873032.294:964): pid=5671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.319197 systemd-logind[1578]: New session 34 of user core. 
Jan 20 01:37:12.329839 kernel: audit: type=1006 audit(1768873032.295:965): pid=5671 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 20 01:37:12.295000 audit[5671]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb3b77940 a2=3 a3=0 items=0 ppid=1 pid=5671 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:12.348503 kernel: audit: type=1300 audit(1768873032.295:965): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb3b77940 a2=3 a3=0 items=0 ppid=1 pid=5671 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:12.295000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:12.350432 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 20 01:37:12.356871 kernel: audit: type=1327 audit(1768873032.295:965): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:12.368000 audit[5671]: USER_START pid=5671 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.386000 audit[5675]: CRED_ACQ pid=5675 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.399603 kernel: audit: type=1105 audit(1768873032.368:966): pid=5671 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.399819 kernel: audit: type=1103 audit(1768873032.386:967): pid=5675 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.632986 sshd[5675]: Connection closed by 10.0.0.1 port 46210 Jan 20 01:37:12.634043 sshd-session[5671]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:12.641000 audit[5671]: USER_END pid=5671 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 20 01:37:12.650065 systemd[1]: sshd@32-10.0.0.144:22-10.0.0.1:46210.service: Deactivated successfully. Jan 20 01:37:12.655530 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 01:37:12.641000 audit[5671]: CRED_DISP pid=5671 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.661203 systemd-logind[1578]: Session 34 logged out. Waiting for processes to exit. Jan 20 01:37:12.664253 systemd-logind[1578]: Removed session 34. Jan 20 01:37:12.665894 kernel: audit: type=1106 audit(1768873032.641:968): pid=5671 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.665972 kernel: audit: type=1104 audit(1768873032.641:969): pid=5671 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:12.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.144:22-10.0.0.1:46210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:12.698536 containerd[1611]: time="2026-01-20T01:37:12.698486054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:37:12.765743 containerd[1611]: time="2026-01-20T01:37:12.765415677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:12.770966 containerd[1611]: time="2026-01-20T01:37:12.770745937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:37:12.770966 containerd[1611]: time="2026-01-20T01:37:12.770934660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:12.772999 kubelet[2780]: E0120 01:37:12.771350 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:37:12.772999 kubelet[2780]: E0120 01:37:12.771429 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:37:12.772999 kubelet[2780]: E0120 01:37:12.771600 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmwd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rs9sl_calico-system(93c423b9-f734-475b-aea9-f003af7097a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:12.773948 kubelet[2780]: E0120 01:37:12.773884 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:37:13.707517 containerd[1611]: time="2026-01-20T01:37:13.707049389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:37:13.796667 containerd[1611]: time="2026-01-20T01:37:13.796269931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:13.800011 containerd[1611]: 
time="2026-01-20T01:37:13.799838532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:37:13.800011 containerd[1611]: time="2026-01-20T01:37:13.799967483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:13.801056 kubelet[2780]: E0120 01:37:13.800381 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:13.801056 kubelet[2780]: E0120 01:37:13.800441 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:13.801056 kubelet[2780]: E0120 01:37:13.800601 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79rhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-prz7k_calico-apiserver(c55441d4-7803-4009-82ca-ee9ec6a88be8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:13.803600 kubelet[2780]: E0120 01:37:13.802482 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:37:14.704996 containerd[1611]: time="2026-01-20T01:37:14.704905808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:37:14.765056 containerd[1611]: time="2026-01-20T01:37:14.764917268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:14.768288 containerd[1611]: time="2026-01-20T01:37:14.768013288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:37:14.768288 containerd[1611]: time="2026-01-20T01:37:14.768243226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:14.768776 kubelet[2780]: E0120 01:37:14.768686 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:37:14.771671 kubelet[2780]: E0120 01:37:14.771592 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:37:14.772039 kubelet[2780]: E0120 01:37:14.771833 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:14.774792 containerd[1611]: time="2026-01-20T01:37:14.774644184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:37:14.844421 containerd[1611]: time="2026-01-20T01:37:14.844325488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:14.847350 containerd[1611]: time="2026-01-20T01:37:14.847269889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:37:14.847466 containerd[1611]: time="2026-01-20T01:37:14.847371058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:14.848893 kubelet[2780]: E0120 01:37:14.847788 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:37:14.848893 kubelet[2780]: E0120 01:37:14.847872 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:37:14.848893 kubelet[2780]: E0120 01:37:14.848021 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72vls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phdz7_calico-system(164d51f9-eed6-48ef-9188-a78d4106afb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:14.849749 kubelet[2780]: E0120 01:37:14.849387 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:37:17.669887 systemd[1]: Started sshd@33-10.0.0.144:22-10.0.0.1:56344.service - OpenSSH per-connection server daemon (10.0.0.1:56344). Jan 20 01:37:17.674940 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:37:17.675041 kernel: audit: type=1130 audit(1768873037.670:971): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.144:22-10.0.0.1:56344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:17.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.144:22-10.0.0.1:56344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:17.844000 audit[5690]: USER_ACCT pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:17.852375 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 56344 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:37:17.854918 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:17.875662 kernel: audit: type=1101 audit(1768873037.844:972): pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:17.875824 kernel: audit: type=1103 audit(1768873037.849:973): pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:17.849000 audit[5690]: CRED_ACQ pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:17.884009 systemd-logind[1578]: New session 35 of user core. 
Jan 20 01:37:17.900538 kernel: audit: type=1006 audit(1768873037.849:974): pid=5690 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 20 01:37:17.900669 kernel: audit: type=1300 audit(1768873037.849:974): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff22b20850 a2=3 a3=0 items=0 ppid=1 pid=5690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:17.849000 audit[5690]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff22b20850 a2=3 a3=0 items=0 ppid=1 pid=5690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:17.919383 kernel: audit: type=1327 audit(1768873037.849:974): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:17.849000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:17.928745 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 20 01:37:17.939000 audit[5690]: USER_START pid=5690 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:17.962279 kernel: audit: type=1105 audit(1768873037.939:975): pid=5690 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:17.944000 audit[5694]: CRED_ACQ pid=5694 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:17.980180 kernel: audit: type=1103 audit(1768873037.944:976): pid=5694 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:18.108150 sshd[5694]: Connection closed by 10.0.0.1 port 56344 Jan 20 01:37:18.109373 sshd-session[5690]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:18.114000 audit[5690]: USER_END pid=5690 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:18.127164 systemd[1]: sshd@33-10.0.0.144:22-10.0.0.1:56344.service: Deactivated successfully. 
Jan 20 01:37:18.130667 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 01:37:18.133249 kernel: audit: type=1106 audit(1768873038.114:977): pid=5690 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:18.114000 audit[5690]: CRED_DISP pid=5690 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:18.134867 systemd-logind[1578]: Session 35 logged out. Waiting for processes to exit. Jan 20 01:37:18.137335 systemd-logind[1578]: Removed session 35. Jan 20 01:37:18.144390 kernel: audit: type=1104 audit(1768873038.114:978): pid=5690 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:18.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.144:22-10.0.0.1:56344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:19.179000 audit[5734]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:37:19.179000 audit[5734]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff63f17750 a2=0 a3=7fff63f1773c items=0 ppid=2936 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:19.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:37:19.194000 audit[5734]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:37:19.194000 audit[5734]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff63f17750 a2=0 a3=7fff63f1773c items=0 ppid=2936 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:19.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:37:19.703388 containerd[1611]: time="2026-01-20T01:37:19.702276698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:37:19.775951 containerd[1611]: time="2026-01-20T01:37:19.775645416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:19.779807 containerd[1611]: time="2026-01-20T01:37:19.779546529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:37:19.780347 containerd[1611]: time="2026-01-20T01:37:19.779934364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:19.781016 kubelet[2780]: E0120 01:37:19.780845 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:19.781016 kubelet[2780]: E0120 01:37:19.781007 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:19.781592 kubelet[2780]: E0120 01:37:19.781365 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8dd7d667-ct8ff_calico-apiserver(c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:19.783147 kubelet[2780]: E0120 01:37:19.782804 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d" Jan 20 01:37:22.702051 kubelet[2780]: E0120 01:37:22.701988 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:37:23.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.144:22-10.0.0.1:37432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:23.127582 systemd[1]: Started sshd@34-10.0.0.144:22-10.0.0.1:37432.service - OpenSSH per-connection server daemon (10.0.0.1:37432). Jan 20 01:37:23.130234 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 01:37:23.130283 kernel: audit: type=1130 audit(1768873043.126:982): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.144:22-10.0.0.1:37432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:23.228000 audit[5737]: USER_ACCT pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.239462 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 37432 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:37:23.244149 kernel: audit: type=1101 audit(1768873043.228:983): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.244000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.249599 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:23.255166 kernel: audit: type=1103 audit(1768873043.244:984): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 01:37:23.255231 kernel: audit: type=1006 audit(1768873043.244:985): pid=5737 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 20 01:37:23.244000 audit[5737]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff30f4a230 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:23.269678 systemd-logind[1578]: New session 36 of user core. Jan 20 01:37:23.244000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:23.294184 kernel: audit: type=1300 audit(1768873043.244:985): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff30f4a230 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:23.294298 kernel: audit: type=1327 audit(1768873043.244:985): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:23.298324 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 20 01:37:23.302000 audit[5737]: USER_START pid=5737 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.331424 kernel: audit: type=1105 audit(1768873043.302:986): pid=5737 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.331522 kernel: audit: type=1103 audit(1768873043.302:987): pid=5741 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.302000 audit[5741]: CRED_ACQ pid=5741 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.477897 sshd[5741]: Connection closed by 10.0.0.1 port 37432 Jan 20 01:37:23.478354 sshd-session[5737]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:23.477000 audit[5737]: USER_END pid=5737 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.487436 systemd[1]: sshd@34-10.0.0.144:22-10.0.0.1:37432.service: Deactivated successfully. 
Jan 20 01:37:23.493593 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 01:37:23.481000 audit[5737]: CRED_DISP pid=5737 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.499004 systemd-logind[1578]: Session 36 logged out. Waiting for processes to exit. Jan 20 01:37:23.500844 systemd-logind[1578]: Removed session 36. Jan 20 01:37:23.512445 kernel: audit: type=1106 audit(1768873043.477:988): pid=5737 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.512666 kernel: audit: type=1104 audit(1768873043.481:989): pid=5737 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:23.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.144:22-10.0.0.1:37432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:37:23.705500 containerd[1611]: time="2026-01-20T01:37:23.705231284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:37:23.793599 containerd[1611]: time="2026-01-20T01:37:23.793421295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:23.796172 containerd[1611]: time="2026-01-20T01:37:23.796030908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:23.796757 containerd[1611]: time="2026-01-20T01:37:23.796172642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:37:23.797467 kubelet[2780]: E0120 01:37:23.797377 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:23.798004 kubelet[2780]: E0120 01:37:23.797466 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:23.799189 kubelet[2780]: E0120 01:37:23.799001 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf2km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-dd7bff465-4rkgx_calico-apiserver(d9baf707-371f-47e4-9f67-1785bd6ba68b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:23.802227 kubelet[2780]: E0120 01:37:23.802167 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dd7bff465-4rkgx" podUID="d9baf707-371f-47e4-9f67-1785bd6ba68b" Jan 20 01:37:24.703689 containerd[1611]: time="2026-01-20T01:37:24.703444405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:37:24.786181 containerd[1611]: time="2026-01-20T01:37:24.786051316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:37:24.798590 containerd[1611]: time="2026-01-20T01:37:24.798417482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:37:24.799384 containerd[1611]: time="2026-01-20T01:37:24.799193358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:37:24.799894 kubelet[2780]: E0120 01:37:24.799709 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:37:24.799894 kubelet[2780]: E0120 01:37:24.799827 2780 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:37:24.801221 kubelet[2780]: E0120 01:37:24.800199 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-947d9dcc-bp5fh_calico-system(e535c75b-4142-4085-8d9d-2841894e5fe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:24.801711 kubelet[2780]: E0120 01:37:24.801590 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-947d9dcc-bp5fh" podUID="e535c75b-4142-4085-8d9d-2841894e5fe8" Jan 20 01:37:26.700622 kubelet[2780]: 
E0120 01:37:26.700281 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phdz7" podUID="164d51f9-eed6-48ef-9188-a78d4106afb9" Jan 20 01:37:27.704280 kubelet[2780]: E0120 01:37:27.704151 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-prz7k" podUID="c55441d4-7803-4009-82ca-ee9ec6a88be8" Jan 20 01:37:27.708246 kubelet[2780]: E0120 01:37:27.704324 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-rs9sl" podUID="93c423b9-f734-475b-aea9-f003af7097a2" Jan 20 01:37:28.498540 systemd[1]: Started sshd@35-10.0.0.144:22-10.0.0.1:37444.service - OpenSSH per-connection server daemon (10.0.0.1:37444). Jan 20 01:37:28.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.144:22-10.0.0.1:37444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:28.501813 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:37:28.501893 kernel: audit: type=1130 audit(1768873048.497:991): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.144:22-10.0.0.1:37444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:28.618000 audit[5762]: USER_ACCT pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.621296 sshd[5762]: Accepted publickey for core from 10.0.0.1 port 37444 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:37:28.628411 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:28.633201 kernel: audit: type=1101 audit(1768873048.618:992): pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.625000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.651441 kernel: audit: type=1103 audit(1768873048.625:993): pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.651609 kernel: audit: type=1006 audit(1768873048.625:994): pid=5762 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 20 01:37:28.650729 systemd-logind[1578]: New session 37 of user core. Jan 20 01:37:28.625000 audit[5762]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc20566340 a2=3 a3=0 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:28.664161 kernel: audit: type=1300 audit(1768873048.625:994): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc20566340 a2=3 a3=0 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:28.625000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:28.669331 kernel: audit: type=1327 audit(1768873048.625:994): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:28.674579 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 20 01:37:28.685000 audit[5762]: USER_START pid=5762 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.692000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.713426 kernel: audit: type=1105 audit(1768873048.685:995): pid=5762 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.714007 kernel: audit: type=1103 audit(1768873048.692:996): pid=5766 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.853257 sshd[5766]: Connection closed by 10.0.0.1 port 37444 Jan 20 01:37:28.856356 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:28.857000 audit[5762]: USER_END pid=5762 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.873334 kernel: audit: type=1106 audit(1768873048.857:997): pid=5762 uid=0 auid=500 ses=37 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.872539 systemd[1]: sshd@35-10.0.0.144:22-10.0.0.1:37444.service: Deactivated successfully. Jan 20 01:37:28.858000 audit[5762]: CRED_DISP pid=5762 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.879235 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 01:37:28.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.144:22-10.0.0.1:37444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:28.886242 kernel: audit: type=1104 audit(1768873048.858:998): pid=5762 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:28.887805 systemd-logind[1578]: Session 37 logged out. Waiting for processes to exit. Jan 20 01:37:28.892553 systemd-logind[1578]: Removed session 37. 
Jan 20 01:37:33.702578 kubelet[2780]: E0120 01:37:33.702277 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b9db9c79-llb9v" podUID="49316a51-69bf-4cd8-a713-083d988333bb" Jan 20 01:37:33.895712 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:37:33.896955 kernel: audit: type=1130 audit(1768873053.880:1000): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.144:22-10.0.0.1:59638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:33.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.144:22-10.0.0.1:59638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:33.880980 systemd[1]: Started sshd@36-10.0.0.144:22-10.0.0.1:59638.service - OpenSSH per-connection server daemon (10.0.0.1:59638). 
Jan 20 01:37:34.000000 audit[5794]: USER_ACCT pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.002453 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 59638 ssh2: RSA SHA256:MffjUK7sXlRezmanFAnKcygaBku2ZTzskgchflAS/TU Jan 20 01:37:34.006384 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:34.000000 audit[5794]: CRED_ACQ pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.025420 systemd-logind[1578]: New session 38 of user core. Jan 20 01:37:34.029686 kernel: audit: type=1101 audit(1768873054.000:1001): pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.029733 kernel: audit: type=1103 audit(1768873054.000:1002): pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.039628 kernel: audit: type=1006 audit(1768873054.000:1003): pid=5794 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 20 01:37:34.000000 audit[5794]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4e1a9c20 a2=3 a3=0 items=0 ppid=1 pid=5794 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:34.056436 kernel: audit: type=1300 audit(1768873054.000:1003): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4e1a9c20 a2=3 a3=0 items=0 ppid=1 pid=5794 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:37:34.000000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:34.057598 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 20 01:37:34.063869 kernel: audit: type=1327 audit(1768873054.000:1003): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:37:34.069000 audit[5794]: USER_START pid=5794 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.088494 kernel: audit: type=1105 audit(1768873054.069:1004): pid=5794 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.072000 audit[5798]: CRED_ACQ pid=5798 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.105181 kernel: audit: type=1103 audit(1768873054.072:1005): pid=5798 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.280224 sshd[5798]: Connection closed by 10.0.0.1 port 59638 Jan 20 01:37:34.281366 sshd-session[5794]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:34.283000 audit[5794]: USER_END pid=5794 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.293823 systemd-logind[1578]: Session 38 logged out. Waiting for processes to exit. Jan 20 01:37:34.295509 systemd[1]: sshd@36-10.0.0.144:22-10.0.0.1:59638.service: Deactivated successfully. Jan 20 01:37:34.305884 kernel: audit: type=1106 audit(1768873054.283:1006): pid=5794 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.305973 kernel: audit: type=1104 audit(1768873054.283:1007): pid=5794 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.283000 audit[5794]: CRED_DISP pid=5794 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:37:34.307281 systemd[1]: session-38.scope: Deactivated successfully. 
Jan 20 01:37:34.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.144:22-10.0.0.1:59638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:37:34.319960 systemd-logind[1578]: Removed session 38. Jan 20 01:37:34.699322 kubelet[2780]: E0120 01:37:34.698898 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8dd7d667-ct8ff" podUID="c9a4e181-6c6f-4f81-9d5f-8631eccf6c7d"