Jan 23 18:31:07.334899 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 15:50:57 -00 2026 Jan 23 18:31:07.334928 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:31:07.334938 kernel: BIOS-provided physical RAM map: Jan 23 18:31:07.334945 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:31:07.334951 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:31:07.334987 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:31:07.334997 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:31:07.335004 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:31:07.335010 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:31:07.335017 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:31:07.335023 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable Jan 23 18:31:07.335029 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 23 18:31:07.335035 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 23 18:31:07.335041 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 23 18:31:07.335051 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 23 18:31:07.335058 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 23 18:31:07.335064 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 23 18:31:07.335071 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 23 18:31:07.335077 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 23 18:31:07.335084 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 23 18:31:07.335092 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 23 18:31:07.335098 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 23 18:31:07.335105 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 23 18:31:07.335111 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 23 18:31:07.335118 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:31:07.335124 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:31:07.335131 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:31:07.335137 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 23 18:31:07.335150 kernel: NX (Execute Disable) protection: active Jan 23 18:31:07.335156 kernel: APIC: Static calls initialized Jan 23 18:31:07.335560 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable Jan 23 18:31:07.335574 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable Jan 23 18:31:07.335581 kernel: extended physical RAM map: Jan 23 18:31:07.335588 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 
18:31:07.335595 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:31:07.335602 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:31:07.335608 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:31:07.335615 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:31:07.335622 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:31:07.335629 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:31:07.335640 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable Jan 23 18:31:07.335647 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable Jan 23 18:31:07.335654 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable Jan 23 18:31:07.335661 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable Jan 23 18:31:07.335670 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable Jan 23 18:31:07.335677 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 23 18:31:07.335684 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 23 18:31:07.335691 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 23 18:31:07.335698 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 23 18:31:07.335705 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 23 18:31:07.335712 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 23 18:31:07.335719 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 23 18:31:07.335726 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 23 18:31:07.335733 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 23 18:31:07.335740 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 23 18:31:07.335748 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 23 18:31:07.335755 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 23 18:31:07.335762 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 23 18:31:07.335769 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:31:07.335776 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:31:07.335784 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:31:07.335791 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 23 18:31:07.335798 kernel: efi: EFI v2.7 by EDK II Jan 23 18:31:07.335805 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018 Jan 23 18:31:07.335812 kernel: random: crng init done Jan 23 18:31:07.335819 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 23 18:31:07.335828 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 23 18:31:07.335835 kernel: secureboot: Secure boot disabled Jan 23 18:31:07.335842 kernel: SMBIOS 2.8 present. 
Jan 23 18:31:07.335849 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 23 18:31:07.335856 kernel: DMI: Memory slots populated: 1/1 Jan 23 18:31:07.335863 kernel: Hypervisor detected: KVM Jan 23 18:31:07.335870 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 23 18:31:07.335877 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 18:31:07.335884 kernel: kvm-clock: using sched offset of 5907258188 cycles Jan 23 18:31:07.335891 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 18:31:07.335901 kernel: tsc: Detected 2294.608 MHz processor Jan 23 18:31:07.335909 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:31:07.335916 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:31:07.335924 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000 Jan 23 18:31:07.335931 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 18:31:07.335939 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:31:07.335946 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 23 18:31:07.335954 kernel: Using GB pages for direct mapping Jan 23 18:31:07.335976 kernel: ACPI: Early table checksum verification disabled Jan 23 18:31:07.335983 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 23 18:31:07.335991 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jan 23 18:31:07.335999 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:31:07.336007 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:31:07.336014 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 23 18:31:07.336022 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:31:07.336031 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:31:07.336038 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:31:07.336046 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 23 18:31:07.336053 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3] Jan 23 18:31:07.336061 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b] Jan 23 18:31:07.336068 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 23 18:31:07.336076 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f] Jan 23 18:31:07.336084 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b] Jan 23 18:31:07.336092 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027] Jan 23 18:31:07.336099 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037] Jan 23 18:31:07.336106 kernel: No NUMA configuration found Jan 23 18:31:07.336114 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 23 18:31:07.336121 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jan 23 18:31:07.336129 kernel: Zone ranges: Jan 23 18:31:07.336136 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:31:07.336146 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 18:31:07.336153 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 23 18:31:07.336160 kernel: Device empty Jan 23 18:31:07.336168 kernel: Movable zone start for each node Jan 
23 18:31:07.336175 kernel: Early memory node ranges Jan 23 18:31:07.336183 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 18:31:07.336190 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 23 18:31:07.336199 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 23 18:31:07.336206 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 23 18:31:07.336214 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff] Jan 23 18:31:07.336220 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff] Jan 23 18:31:07.336228 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff] Jan 23 18:31:07.336242 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff] Jan 23 18:31:07.336251 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff] Jan 23 18:31:07.336259 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 23 18:31:07.336267 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 23 18:31:07.336275 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:31:07.336285 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 18:31:07.336293 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 23 18:31:07.336301 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:31:07.336309 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 23 18:31:07.336318 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 23 18:31:07.336326 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges Jan 23 18:31:07.336334 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 23 18:31:07.336342 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 23 18:31:07.336350 kernel: On node 0, zone Normal: 276 pages in unavailable ranges Jan 23 18:31:07.336359 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 18:31:07.336367 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 18:31:07.336375 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:31:07.342045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 18:31:07.342058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 18:31:07.342067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:31:07.342076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 18:31:07.342084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 18:31:07.342092 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:31:07.342100 kernel: TSC deadline timer available Jan 23 18:31:07.342112 kernel: CPU topo: Max. logical packages: 2 Jan 23 18:31:07.342120 kernel: CPU topo: Max. logical dies: 2 Jan 23 18:31:07.342128 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:31:07.342136 kernel: CPU topo: Max. threads per core: 1 Jan 23 18:31:07.342144 kernel: CPU topo: Num. cores per package: 1 Jan 23 18:31:07.342153 kernel: CPU topo: Num. 
threads per package: 1 Jan 23 18:31:07.342160 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 18:31:07.342171 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 18:31:07.342179 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 18:31:07.342188 kernel: kvm-guest: setup PV sched yield Jan 23 18:31:07.342196 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 23 18:31:07.342204 kernel: Booting paravirtualized kernel on KVM Jan 23 18:31:07.342212 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:31:07.342221 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 18:31:07.342229 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 18:31:07.342239 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 18:31:07.342247 kernel: pcpu-alloc: [0] 0 1 Jan 23 18:31:07.342255 kernel: kvm-guest: PV spinlocks enabled Jan 23 18:31:07.342263 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:31:07.342273 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:31:07.342282 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:31:07.342292 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 18:31:07.342300 kernel: Fallback order for Node 0: 0 Jan 23 18:31:07.342308 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694 Jan 23 18:31:07.342317 kernel: Policy zone: Normal Jan 23 18:31:07.342325 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:31:07.342333 kernel: software IO TLB: area num 2. Jan 23 18:31:07.342342 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 18:31:07.342352 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:31:07.342360 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:31:07.342369 kernel: Dynamic Preempt: voluntary Jan 23 18:31:07.342377 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:31:07.342386 kernel: rcu: RCU event tracing is enabled. Jan 23 18:31:07.342394 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 18:31:07.342402 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:31:07.342411 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:31:07.342420 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:31:07.342428 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:31:07.342436 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 18:31:07.342444 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:31:07.342453 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:31:07.342461 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 18:31:07.342469 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 23 18:31:07.342480 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 18:31:07.342488 kernel: Console: colour dummy device 80x25 Jan 23 18:31:07.342496 kernel: printk: legacy console [tty0] enabled Jan 23 18:31:07.342504 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:31:07.342512 kernel: ACPI: Core revision 20240827 Jan 23 18:31:07.342521 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:31:07.342529 kernel: x2apic enabled Jan 23 18:31:07.342537 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:31:07.342547 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 18:31:07.342555 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 18:31:07.342563 kernel: kvm-guest: setup PV IPIs Jan 23 18:31:07.342572 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 23 18:31:07.342580 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jan 23 18:31:07.342588 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 18:31:07.342598 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 23 18:31:07.342606 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 23 18:31:07.342613 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:31:07.342621 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 23 18:31:07.342628 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 23 18:31:07.342636 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 23 18:31:07.342644 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 23 18:31:07.342651 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 23 18:31:07.342659 kernel: TAA: Mitigation: Clear CPU buffers Jan 23 18:31:07.342666 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 23 18:31:07.342674 kernel: active return thunk: its_return_thunk Jan 23 18:31:07.342683 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 18:31:07.342691 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:31:07.342699 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:31:07.342706 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:31:07.342714 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 23 18:31:07.342722 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 23 18:31:07.342729 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 23 18:31:07.342737 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 23 18:31:07.342744 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:31:07.342754 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 23 18:31:07.342761 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 23 18:31:07.342769 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 23 18:31:07.342776 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 23 18:31:07.342784 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Jan 23 18:31:07.342792 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:31:07.342800 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:31:07.342807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:31:07.342815 kernel: landlock: Up and running. Jan 23 18:31:07.342822 kernel: SELinux: Initializing. Jan 23 18:31:07.342830 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:31:07.342837 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:31:07.342847 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6) Jan 23 18:31:07.342855 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver. Jan 23 18:31:07.342863 kernel: ... version: 2 Jan 23 18:31:07.342871 kernel: ... bit width: 48 Jan 23 18:31:07.342879 kernel: ... generic registers: 8 Jan 23 18:31:07.342887 kernel: ... value mask: 0000ffffffffffff Jan 23 18:31:07.342895 kernel: ... max period: 00007fffffffffff Jan 23 18:31:07.342905 kernel: ... fixed-purpose events: 3 Jan 23 18:31:07.342913 kernel: ... event mask: 00000007000000ff Jan 23 18:31:07.342921 kernel: signal: max sigframe size: 3632 Jan 23 18:31:07.342929 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:31:07.342937 kernel: rcu: Max phase no-delay instances is 400. Jan 23 18:31:07.342946 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:31:07.344379 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:31:07.344397 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:31:07.344409 kernel: .... node #0, CPUs: #1 Jan 23 18:31:07.344417 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 18:31:07.344426 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jan 23 18:31:07.344435 kernel: Memory: 3969764K/4186776K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 212136K reserved, 0K cma-reserved) Jan 23 18:31:07.344443 kernel: devtmpfs: initialized Jan 23 18:31:07.344451 kernel: x86/mm: Memory block size: 128MB Jan 23 18:31:07.344459 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 23 18:31:07.344470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 23 18:31:07.344478 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 23 18:31:07.344486 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 23 18:31:07.344495 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes) Jan 23 18:31:07.344503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes) Jan 23 18:31:07.344511 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:31:07.344519 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 18:31:07.344529 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:31:07.344537 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:31:07.344546 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:31:07.344554 kernel: audit: type=2000 audit(1769193064.471:1): state=initialized audit_enabled=0 res=1 Jan 23 18:31:07.344562 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:31:07.344570 kernel: thermal_sys: Registered thermal governor 'user_space' 
Jan 23 18:31:07.344578 kernel: cpuidle: using governor menu Jan 23 18:31:07.344588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:31:07.344597 kernel: dca service started, version 1.12.1 Jan 23 18:31:07.344610 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 23 18:31:07.344627 kernel: PCI: Using configuration type 1 for base access Jan 23 18:31:07.344643 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 23 18:31:07.344660 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:31:07.344677 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:31:07.344698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:31:07.344715 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:31:07.344732 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:31:07.344750 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:31:07.344765 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:31:07.344783 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:31:07.344800 kernel: ACPI: Interpreter enabled Jan 23 18:31:07.344820 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 18:31:07.344837 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:31:07.344854 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:31:07.344869 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 18:31:07.344887 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 18:31:07.344904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 18:31:07.345117 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 18:31:07.346324 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 18:31:07.346436 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 18:31:07.346448 kernel: PCI host bridge to bus 0000:00 Jan 23 18:31:07.346552 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 18:31:07.346645 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 18:31:07.346739 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 18:31:07.346829 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 23 18:31:07.346919 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 23 18:31:07.347824 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window] Jan 23 18:31:07.347933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 18:31:07.348069 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 18:31:07.348186 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 23 18:31:07.348285 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref] Jan 23 18:31:07.348386 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref] Jan 23 18:31:07.348483 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff] Jan 23 18:31:07.348578 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 23 18:31:07.348674 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 18:31:07.348780 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 
0x060400 PCIe Root Port Jan 23 18:31:07.348878 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff] Jan 23 18:31:07.350179 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 18:31:07.350297 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 23 18:31:07.350405 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 23 18:31:07.350514 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:31:07.350628 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.350728 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff] Jan 23 18:31:07.350824 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 18:31:07.350920 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 23 18:31:07.351029 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 18:31:07.351137 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.351235 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff] Jan 23 18:31:07.351329 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 18:31:07.351430 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 23 18:31:07.351525 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 18:31:07.351626 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.351726 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff] Jan 23 18:31:07.351822 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 18:31:07.351919 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 23 18:31:07.352031 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 18:31:07.352134 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.352233 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff] Jan 23 18:31:07.352341 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 18:31:07.352437 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 23 18:31:07.352533 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 18:31:07.352634 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.352730 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff] Jan 23 18:31:07.352829 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 18:31:07.352924 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 23 18:31:07.353029 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 18:31:07.353130 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.353226 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff] Jan 23 18:31:07.353326 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 18:31:07.353420 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 23 18:31:07.353524 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 18:31:07.353624 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.353719 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff] Jan 23 18:31:07.353814 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 18:31:07.353912 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 23 
18:31:07.354020 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 18:31:07.354125 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.354222 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff] Jan 23 18:31:07.354317 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 18:31:07.354413 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 23 18:31:07.354510 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 18:31:07.354615 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.354764 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff] Jan 23 18:31:07.354903 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 18:31:07.355027 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 23 18:31:07.355124 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 18:31:07.355227 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.355322 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 23 18:31:07.355416 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 18:31:07.355511 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 18:31:07.355606 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 18:31:07.355706 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.355804 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 23 18:31:07.355910 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 18:31:07.356016 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 18:31:07.356112 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 18:31:07.356217 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.356312 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 23 18:31:07.356406 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 18:31:07.356500 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 18:31:07.356595 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 18:31:07.356693 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.356798 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 23 18:31:07.356893 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 18:31:07.356995 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 18:31:07.357091 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 18:31:07.357192 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.357289 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Jan 23 18:31:07.357383 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 18:31:07.359074 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 18:31:07.359195 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 18:31:07.359302 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.359402 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 23 18:31:07.359507 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 
18:31:07.359603 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 23 18:31:07.359699 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 18:31:07.359803 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.359902 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 23 18:31:07.360008 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 18:31:07.360108 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 18:31:07.360203 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 18:31:07.360305 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.360402 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 23 18:31:07.360498 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 18:31:07.360593 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 18:31:07.360691 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 18:31:07.360794 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.360890 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 23 18:31:07.360997 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 18:31:07.361094 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 23 18:31:07.361190 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 18:31:07.361295 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.361391 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 23 18:31:07.361485 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:31:07.361597 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 18:31:07.361693 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 18:31:07.361795 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.361895 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 23 18:31:07.362003 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:31:07.362100 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 18:31:07.362196 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 18:31:07.362296 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.362392 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 23 18:31:07.362490 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:31:07.362584 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 18:31:07.362680 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 18:31:07.362781 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.362880 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 23 18:31:07.365020 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:31:07.365144 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 18:31:07.365244 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 18:31:07.365345 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.365448 kernel: pci 0000:00:04.7: BAR 0 [mem 
0x84386000-0x84386fff] Jan 23 18:31:07.365562 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:31:07.365659 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 18:31:07.365754 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 18:31:07.365856 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.366144 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 23 18:31:07.366258 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:31:07.366354 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 18:31:07.366451 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 18:31:07.366554 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.366651 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 23 18:31:07.366746 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 18:31:07.366844 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 18:31:07.366940 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 18:31:07.367716 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.367821 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 23 18:31:07.367918 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:31:07.368028 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 18:31:07.368128 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 18:31:07.368233 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.368330 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 23 18:31:07.368425 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:31:07.368521 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 18:31:07.368631 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 18:31:07.368737 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:31:07.368833 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 23 18:31:07.368929 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:31:07.369043 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 18:31:07.369143 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 18:31:07.369249 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:31:07.369360 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 18:31:07.369466 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 18:31:07.369583 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 23 18:31:07.369683 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 23 18:31:07.369786 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 18:31:07.369884 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 23 18:31:07.371462 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 23 18:31:07.371585 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 23 18:31:07.371685 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:31:07.371783 kernel: pci 
0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 18:31:07.371881 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 18:31:07.372002 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:31:07.372101 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 18:31:07.372209 kernel: pci_bus 0000:02: extended config space not accessible Jan 23 18:31:07.372222 kernel: acpiphp: Slot [1] registered Jan 23 18:31:07.372231 kernel: acpiphp: Slot [0] registered Jan 23 18:31:07.372240 kernel: acpiphp: Slot [2] registered Jan 23 18:31:07.372251 kernel: acpiphp: Slot [3] registered Jan 23 18:31:07.372260 kernel: acpiphp: Slot [4] registered Jan 23 18:31:07.372268 kernel: acpiphp: Slot [5] registered Jan 23 18:31:07.372276 kernel: acpiphp: Slot [6] registered Jan 23 18:31:07.372285 kernel: acpiphp: Slot [7] registered Jan 23 18:31:07.372294 kernel: acpiphp: Slot [8] registered Jan 23 18:31:07.372302 kernel: acpiphp: Slot [9] registered Jan 23 18:31:07.372311 kernel: acpiphp: Slot [10] registered Jan 23 18:31:07.372321 kernel: acpiphp: Slot [11] registered Jan 23 18:31:07.372330 kernel: acpiphp: Slot [12] registered Jan 23 18:31:07.372338 kernel: acpiphp: Slot [13] registered Jan 23 18:31:07.372347 kernel: acpiphp: Slot [14] registered Jan 23 18:31:07.372355 kernel: acpiphp: Slot [15] registered Jan 23 18:31:07.372364 kernel: acpiphp: Slot [16] registered Jan 23 18:31:07.372372 kernel: acpiphp: Slot [17] registered Jan 23 18:31:07.372382 kernel: acpiphp: Slot [18] registered Jan 23 18:31:07.372391 kernel: acpiphp: Slot [19] registered Jan 23 18:31:07.372399 kernel: acpiphp: Slot [20] registered Jan 23 18:31:07.372407 kernel: acpiphp: Slot [21] registered Jan 23 18:31:07.372415 kernel: acpiphp: Slot [22] registered Jan 23 18:31:07.372424 kernel: acpiphp: Slot [23] registered Jan 23 18:31:07.372432 kernel: acpiphp: Slot [24] registered Jan 23 18:31:07.372441 kernel: acpiphp: Slot [25] registered Jan 23 18:31:07.372451 kernel: acpiphp: Slot [26] registered Jan 23 18:31:07.372460 kernel: acpiphp: Slot [27] registered Jan 23 18:31:07.372468 kernel: acpiphp: Slot [28] registered Jan 23 18:31:07.372477 kernel: acpiphp: Slot [29] registered Jan 23 18:31:07.372485 kernel: acpiphp: Slot [30] registered Jan 23 18:31:07.372494 kernel: acpiphp: Slot [31] registered Jan 23 18:31:07.372602 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 23 18:31:07.372710 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 23 18:31:07.372810 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:31:07.372820 kernel: acpiphp: Slot [0-2] registered Jan 23 18:31:07.372927 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 18:31:07.374079 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 23 18:31:07.374192 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 23 18:31:07.374298 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 18:31:07.374399 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 18:31:07.374410 kernel: acpiphp: Slot [0-3] registered Jan 23 18:31:07.374516 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 23 18:31:07.374618 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 23 18:31:07.374718 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 23 18:31:07.374819 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 18:31:07.374831 
kernel: acpiphp: Slot [0-4] registered Jan 23 18:31:07.374937 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 23 18:31:07.375383 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 23 18:31:07.375488 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 18:31:07.375503 kernel: acpiphp: Slot [0-5] registered Jan 23 18:31:07.375608 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 23 18:31:07.375711 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 23 18:31:07.376159 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 23 18:31:07.376261 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 18:31:07.376273 kernel: acpiphp: Slot [0-6] registered Jan 23 18:31:07.376370 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 18:31:07.376385 kernel: acpiphp: Slot [0-7] registered Jan 23 18:31:07.376481 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 18:31:07.376493 kernel: acpiphp: Slot [0-8] registered Jan 23 18:31:07.376588 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 18:31:07.376600 kernel: acpiphp: Slot [0-9] registered Jan 23 18:31:07.376698 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 18:31:07.376711 kernel: acpiphp: Slot [0-10] registered Jan 23 18:31:07.376808 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 18:31:07.376819 kernel: acpiphp: Slot [0-11] registered Jan 23 18:31:07.376915 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 18:31:07.376926 kernel: acpiphp: Slot [0-12] registered Jan 23 18:31:07.377031 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 18:31:07.377045 kernel: acpiphp: Slot [0-13] registered Jan 23 18:31:07.377141 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 18:31:07.377152 kernel: acpiphp: Slot [0-14] registered Jan 23 18:31:07.377247 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 18:31:07.377258 kernel: acpiphp: Slot [0-15] registered Jan 23 18:31:07.377352 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 18:31:07.377363 kernel: acpiphp: Slot [0-16] registered Jan 23 18:31:07.377461 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 18:31:07.377472 kernel: acpiphp: Slot [0-17] registered Jan 23 18:31:07.377578 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 18:31:07.377590 kernel: acpiphp: Slot [0-18] registered Jan 23 18:31:07.377685 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 18:31:07.377696 kernel: acpiphp: Slot [0-19] registered Jan 23 18:31:07.377795 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 18:31:07.377806 kernel: acpiphp: Slot [0-20] registered Jan 23 18:31:07.377902 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:31:07.377913 kernel: acpiphp: Slot [0-21] registered Jan 23 18:31:07.378266 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:31:07.378282 kernel: acpiphp: Slot [0-22] registered Jan 23 18:31:07.378379 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:31:07.378394 kernel: acpiphp: Slot [0-23] registered Jan 23 18:31:07.378490 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:31:07.378501 kernel: acpiphp: Slot [0-24] registered Jan 23 18:31:07.378597 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:31:07.378608 kernel: acpiphp: Slot [0-25] registered Jan 23 18:31:07.378702 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:31:07.378715 kernel: acpiphp: Slot [0-26] registered Jan 23 18:31:07.378810 kernel: pci 0000:00:05.1: PCI bridge to [bus 
1b] Jan 23 18:31:07.378822 kernel: acpiphp: Slot [0-27] registered Jan 23 18:31:07.378916 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:31:07.378927 kernel: acpiphp: Slot [0-28] registered Jan 23 18:31:07.379044 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:31:07.379058 kernel: acpiphp: Slot [0-29] registered Jan 23 18:31:07.379154 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:31:07.379165 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:31:07.379174 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:31:07.379183 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 18:31:07.379191 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:31:07.379199 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 18:31:07.379210 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 18:31:07.379218 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 18:31:07.379227 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 18:31:07.379236 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 18:31:07.379244 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 18:31:07.379252 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 18:31:07.379261 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 18:31:07.379271 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 18:31:07.379666 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 18:31:07.379675 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 18:31:07.379684 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 18:31:07.379692 kernel: iommu: Default domain type: Translated Jan 23 18:31:07.379701 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:31:07.379709 kernel: efivars: Registered efivars operations Jan 23 18:31:07.379721 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:31:07.379729 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:31:07.379738 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 23 18:31:07.379746 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 23 18:31:07.379754 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 23 18:31:07.379763 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 23 18:31:07.379771 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 23 18:31:07.379781 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 23 18:31:07.379790 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 23 18:31:07.379798 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 23 18:31:07.379806 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 23 18:31:07.379923 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 18:31:07.380046 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 18:31:07.380688 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 18:31:07.380710 kernel: vgaarb: loaded Jan 23 18:31:07.380720 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:31:07.380729 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:31:07.380738 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:31:07.380747 kernel: pnp: PnP ACPI init Jan 23 18:31:07.380865 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] 
has been reserved Jan 23 18:31:07.380878 kernel: pnp: PnP ACPI: found 5 devices Jan 23 18:31:07.380889 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:31:07.380897 kernel: NET: Registered PF_INET protocol family Jan 23 18:31:07.380906 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:31:07.380914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:31:07.380923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:31:07.380932 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:31:07.380941 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:31:07.380951 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:31:07.380974 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:31:07.380982 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:31:07.380991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:31:07.380999 kernel: NET: Registered PF_XDP protocol family Jan 23 18:31:07.381106 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 18:31:07.381209 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 23 18:31:07.381311 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 23 18:31:07.381418 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 23 18:31:07.381526 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 23 18:31:07.381625 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 18:31:07.381725 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 18:31:07.381822 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 18:31:07.381922 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 23 18:31:07.382542 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 23 18:31:07.382649 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 23 18:31:07.382749 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 23 18:31:07.382848 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 23 18:31:07.382946 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 23 18:31:07.383061 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 23 18:31:07.383160 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 23 18:31:07.383260 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 23 18:31:07.384288 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 23 18:31:07.384394 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 23 18:31:07.384495 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 23 18:31:07.384592 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 23 18:31:07.384693 kernel: pci 
0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 23 18:31:07.384789 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 23 18:31:07.384885 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 23 18:31:07.384994 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 23 18:31:07.385097 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 23 18:31:07.385194 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 23 18:31:07.385295 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 23 18:31:07.385391 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 23 18:31:07.385488 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 23 18:31:07.385594 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 23 18:31:07.385690 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 23 18:31:07.385787 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 23 18:31:07.385887 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 23 18:31:07.385995 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 23 18:31:07.386091 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 23 18:31:07.386191 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 23 18:31:07.386288 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 23 18:31:07.386384 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 23 18:31:07.386479 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 23 18:31:07.386578 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 23 18:31:07.386674 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 23 18:31:07.386769 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.386865 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.387386 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.387512 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.387618 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.387716 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.387812 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.387908 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.388015 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.388111 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.388206 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.388305 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.388403 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.388499 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.388594 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 
18:31:07.388689 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.388784 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.388882 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.388999 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.389095 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.389189 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.389285 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.389379 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.389497 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.389839 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.389944 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.390054 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.390150 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.390246 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.390342 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.390441 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 23 18:31:07.390536 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 23 18:31:07.390632 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 18:31:07.390728 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 23 18:31:07.390822 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 23 18:31:07.390918 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 18:31:07.391679 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 23 18:31:07.391782 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 23 18:31:07.391878 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 23 18:31:07.391988 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 23 18:31:07.392084 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 23 18:31:07.392180 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 23 18:31:07.392275 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 23 18:31:07.392375 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.392470 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.392565 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.392660 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.392755 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.392850 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.392945 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.393054 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.393149 kernel: pci 
0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.393243 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.393339 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.393434 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.393557 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.393656 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.393751 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.393846 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.393941 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.394044 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.394140 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.394237 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.394333 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.394428 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.394523 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.394618 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.394713 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.394808 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.394906 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.395009 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.395105 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:31:07.395199 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:31:07.395300 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:31:07.395398 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 18:31:07.395498 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 18:31:07.395596 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:31:07.395691 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 18:31:07.395787 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 23 18:31:07.395882 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 23 18:31:07.395986 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:31:07.399231 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Jan 23 18:31:07.399346 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 18:31:07.399443 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 23 18:31:07.399539 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 18:31:07.399635 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 18:31:07.399730 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 23 18:31:07.399825 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 
18:31:07.399921 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 18:31:07.400039 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 23 18:31:07.400138 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 18:31:07.400233 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 18:31:07.400328 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 23 18:31:07.400423 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 18:31:07.400518 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 18:31:07.400613 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 23 18:31:07.400707 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 18:31:07.400807 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 18:31:07.400901 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 23 18:31:07.401012 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 18:31:07.401109 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 18:31:07.401205 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 23 18:31:07.402982 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 18:31:07.403120 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 18:31:07.403223 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 23 18:31:07.403320 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 18:31:07.403419 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 18:31:07.403515 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 23 18:31:07.403610 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 18:31:07.403706 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 18:31:07.403804 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 18:31:07.403898 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 18:31:07.404005 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 18:31:07.404100 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 18:31:07.404195 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 18:31:07.404290 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 18:31:07.404384 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 18:31:07.404482 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 18:31:07.404581 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 18:31:07.404674 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 18:31:07.404769 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 18:31:07.404864 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 18:31:07.404969 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 18:31:07.405066 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 18:31:07.405161 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 18:31:07.405259 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 23 18:31:07.405354 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 
18:31:07.405450 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 18:31:07.405556 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 23 18:31:07.405652 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 18:31:07.405750 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 18:31:07.405849 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 18:31:07.405945 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 23 18:31:07.407611 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 18:31:07.407718 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 18:31:07.407818 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 18:31:07.407914 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 23 18:31:07.408029 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 23 18:31:07.408125 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 18:31:07.408224 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:31:07.408319 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 23 18:31:07.408416 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 18:31:07.408510 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 18:31:07.408609 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:31:07.408712 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 23 18:31:07.408813 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 18:31:07.408908 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 18:31:07.409018 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:31:07.409114 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 23 18:31:07.409212 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 18:31:07.409307 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 18:31:07.409405 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:31:07.409500 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 23 18:31:07.409607 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 18:31:07.409701 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 18:31:07.409801 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:31:07.409896 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 23 18:31:07.410331 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 18:31:07.410434 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 18:31:07.410533 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:31:07.410628 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 23 18:31:07.410727 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 18:31:07.410822 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 18:31:07.410924 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 18:31:07.411033 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 23 18:31:07.411128 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 18:31:07.411223 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 
18:31:07.411322 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:31:07.411416 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 23 18:31:07.411511 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 18:31:07.411605 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 18:31:07.411702 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:31:07.411796 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 23 18:31:07.411891 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 18:31:07.412010 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 18:31:07.412108 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:31:07.412203 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 23 18:31:07.412298 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 18:31:07.412395 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 18:31:07.412491 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:31:07.412580 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:31:07.412667 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:31:07.412753 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 23 18:31:07.412839 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 23 18:31:07.412925 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 23 18:31:07.413034 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 23 18:31:07.413128 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 23 18:31:07.413216 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:31:07.413313 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 23 18:31:07.413406 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 23 18:31:07.413498 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:31:07.413606 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 23 18:31:07.413696 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 18:31:07.413934 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 23 18:31:07.414040 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 18:31:07.414137 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 23 18:31:07.414228 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 18:31:07.414326 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 23 18:31:07.414416 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 18:31:07.414512 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 23 18:31:07.414601 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 18:31:07.414696 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 23 18:31:07.414788 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 18:31:07.414886 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 23 18:31:07.414985 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 18:31:07.415080 kernel: pci_bus 0000:0a: 
resource 1 [mem 0x83000000-0x831fffff] Jan 23 18:31:07.415169 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 18:31:07.415265 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 23 18:31:07.415355 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 18:31:07.415489 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 23 18:31:07.415582 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 18:31:07.415681 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 23 18:31:07.415774 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 18:31:07.415868 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 23 18:31:07.415990 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 18:31:07.416087 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 23 18:31:07.416177 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 18:31:07.416274 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 23 18:31:07.416364 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 18:31:07.416461 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 23 18:31:07.416550 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 18:31:07.416645 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 23 18:31:07.416736 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 23 18:31:07.416825 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 18:31:07.416919 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 23 18:31:07.417018 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 23 18:31:07.417107 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 18:31:07.417200 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 23 18:31:07.417292 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 23 18:31:07.417381 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 18:31:07.417476 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 23 18:31:07.417582 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 23 18:31:07.417671 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 18:31:07.417769 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 23 18:31:07.417858 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Jan 23 18:31:07.417947 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 18:31:07.418051 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 23 18:31:07.418141 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 23 18:31:07.418230 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 18:31:07.418326 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 23 18:31:07.418415 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 23 18:31:07.418505 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 18:31:07.418598 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 23 18:31:07.418688 kernel: pci_bus 0000:19: resource 1 [mem 
0x81200000-0x813fffff] Jan 23 18:31:07.418779 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 18:31:07.418872 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 23 18:31:07.418969 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 23 18:31:07.419058 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 18:31:07.419155 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 23 18:31:07.419245 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 23 18:31:07.419337 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 18:31:07.419430 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 23 18:31:07.419520 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 23 18:31:07.419609 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 18:31:07.419703 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 23 18:31:07.419793 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 23 18:31:07.419884 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 18:31:07.421463 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 23 18:31:07.421592 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 23 18:31:07.421685 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 18:31:07.421697 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 18:31:07.421707 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:31:07.421719 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 18:31:07.421728 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 23 18:31:07.421736 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 18:31:07.421745 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 23 18:31:07.421754 kernel: Initialise system trusted keyrings Jan 23 18:31:07.421763 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 18:31:07.421776 kernel: Key type asymmetric registered Jan 23 18:31:07.421784 kernel: Asymmetric key parser 'x509' registered Jan 23 18:31:07.421793 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:31:07.421801 kernel: io scheduler mq-deadline registered Jan 23 18:31:07.421810 kernel: io scheduler kyber registered Jan 23 18:31:07.421818 kernel: io scheduler bfq registered Jan 23 18:31:07.421922 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 18:31:07.423108 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 18:31:07.423224 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 18:31:07.423325 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 18:31:07.423423 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 18:31:07.423521 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 18:31:07.423618 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 18:31:07.423717 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 18:31:07.423814 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 18:31:07.423910 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 18:31:07.424016 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 18:31:07.424116 kernel: pcieport 
0000:00:02.5: AER: enabled with IRQ 29 Jan 23 18:31:07.424212 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 18:31:07.424310 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 18:31:07.424406 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 18:31:07.424503 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 18:31:07.424516 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 18:31:07.424613 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 23 18:31:07.424710 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 23 18:31:07.424808 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 23 18:31:07.424903 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 23 18:31:07.425012 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 23 18:31:07.425110 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 23 18:31:07.425210 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 23 18:31:07.425305 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 23 18:31:07.425402 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 23 18:31:07.425499 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 23 18:31:07.425608 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 23 18:31:07.425704 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 23 18:31:07.425801 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 23 18:31:07.425896 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 23 18:31:07.426612 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 23 18:31:07.426731 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 23 18:31:07.426744 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 18:31:07.427392 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 23 18:31:07.427505 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 23 18:31:07.427603 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 23 18:31:07.427701 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 23 18:31:07.427802 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 23 18:31:07.427898 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 23 18:31:07.428017 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 23 18:31:07.428114 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 23 18:31:07.428213 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 23 18:31:07.428309 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 23 18:31:07.428405 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 23 18:31:07.428503 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 23 18:31:07.428601 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 23 18:31:07.428700 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 23 18:31:07.428797 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 23 18:31:07.428892 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 23 18:31:07.428904 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 23 18:31:07.430031 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 23 18:31:07.430143 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 23 18:31:07.430245 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 23 18:31:07.430342 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 23 18:31:07.430440 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 23 18:31:07.430536 kernel: 
pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 23 18:31:07.430632 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Jan 23 18:31:07.430730 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 23 18:31:07.430827 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 23 18:31:07.430922 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 23 18:31:07.430933 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:31:07.430942 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:31:07.430951 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:31:07.430968 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:31:07.430979 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:31:07.430988 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:31:07.430997 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 18:31:07.431104 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 18:31:07.431198 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 18:31:07.431289 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T18:31:05 UTC (1769193065) Jan 23 18:31:07.431382 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 18:31:07.431392 kernel: intel_pstate: CPU model not supported Jan 23 18:31:07.431402 kernel: efifb: probing for efifb Jan 23 18:31:07.431410 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 23 18:31:07.431419 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 23 18:31:07.431427 kernel: efifb: scrolling: redraw Jan 23 18:31:07.431436 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 18:31:07.431446 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:31:07.431455 kernel: fb0: EFI VGA frame buffer device Jan 23 18:31:07.431463 kernel: pstore: Using crash dump compression: deflate Jan 23 18:31:07.431472 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:31:07.431482 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:31:07.431490 kernel: Segment Routing with IPv6 Jan 23 18:31:07.431498 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:31:07.431507 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:31:07.431517 kernel: Key type dns_resolver registered Jan 23 18:31:07.431525 kernel: IPI shorthand broadcast: enabled Jan 23 18:31:07.431534 kernel: sched_clock: Marking stable (2648001256, 153269415)->(2902871446, -101600775) Jan 23 18:31:07.431542 kernel: registered taskstats version 1 Jan 23 18:31:07.431551 kernel: Loading compiled-in X.509 certificates Jan 23 18:31:07.431559 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed4528912f8413ae803010e63385bcf7ed197cf1' Jan 23 18:31:07.431568 kernel: Demotion targets for Node 0: null Jan 23 18:31:07.431578 kernel: Key type .fscrypt registered Jan 23 18:31:07.431586 kernel: Key type fscrypt-provisioning registered Jan 23 18:31:07.431595 kernel: ima: No TPM chip found, activating TPM-bypass! 
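
The rtc_cmos line above pairs a human-readable UTC time with its Unix epoch value (1769193065). A quick cross-check in plain Python, using nothing beyond the two values printed in the log, confirms they describe the same instant:

    # Check that the RTC epoch value logged above matches the printed UTC time.
    from datetime import datetime, timezone

    epoch = 1769193065                       # value printed by rtc_cmos 00:03
    utc = datetime.fromtimestamp(epoch, tz=timezone.utc)
    assert utc.isoformat() == "2026-01-23T18:31:05+00:00"
    print(utc)                               # 2026-01-23 18:31:05+00:00
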
Jan 23 18:31:07.431603 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:31:07.431611 kernel: ima: No architecture policies found Jan 23 18:31:07.431620 kernel: clk: Disabling unused clocks Jan 23 18:31:07.431628 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 23 18:31:07.431639 kernel: Write protecting the kernel read-only data: 47104k Jan 23 18:31:07.431647 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 23 18:31:07.431656 kernel: Run /init as init process Jan 23 18:31:07.431664 kernel: with arguments: Jan 23 18:31:07.431673 kernel: /init Jan 23 18:31:07.431682 kernel: with environment: Jan 23 18:31:07.431690 kernel: HOME=/ Jan 23 18:31:07.431698 kernel: TERM=linux Jan 23 18:31:07.431708 kernel: SCSI subsystem initialized Jan 23 18:31:07.431717 kernel: libata version 3.00 loaded. Jan 23 18:31:07.431817 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:31:07.431829 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:31:07.431930 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:31:07.432234 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:31:07.432340 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:31:07.432451 kernel: scsi host0: ahci Jan 23 18:31:07.432555 kernel: scsi host1: ahci Jan 23 18:31:07.432674 kernel: scsi host2: ahci Jan 23 18:31:07.432831 kernel: scsi host3: ahci Jan 23 18:31:07.433022 kernel: scsi host4: ahci Jan 23 18:31:07.433142 kernel: scsi host5: ahci Jan 23 18:31:07.433155 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 55 lpm-pol 1 Jan 23 18:31:07.433165 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 55 lpm-pol 1 Jan 23 18:31:07.433174 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 55 lpm-pol 1 Jan 23 18:31:07.433183 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 55 lpm-pol 1 Jan 23 18:31:07.433191 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 55 lpm-pol 1 Jan 23 18:31:07.433203 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 55 lpm-pol 1 Jan 23 18:31:07.433212 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:31:07.433221 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:31:07.433233 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:31:07.433242 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 18:31:07.433252 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:31:07.433261 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 18:31:07.433272 kernel: ACPI: bus type USB registered Jan 23 18:31:07.433281 kernel: usbcore: registered new interface driver usbfs Jan 23 18:31:07.433289 kernel: usbcore: registered new interface driver hub Jan 23 18:31:07.433298 kernel: usbcore: registered new device driver usb Jan 23 18:31:07.433410 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 23 18:31:07.433536 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 23 18:31:07.433645 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 23 18:31:07.433752 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 23 18:31:07.433882 kernel: hub 1-0:1.0: USB hub found Jan 23 18:31:07.434005 kernel: hub 1-0:1.0: 2 ports detected Jan 23 18:31:07.434121 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 23 18:31:07.434221 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks 
(53.7 GB/50.0 GiB) Jan 23 18:31:07.434235 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:31:07.434245 kernel: GPT:25804799 != 104857599 Jan 23 18:31:07.434254 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:31:07.434263 kernel: GPT:25804799 != 104857599 Jan 23 18:31:07.434271 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:31:07.434280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:31:07.434289 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:31:07.434300 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:31:07.434309 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:31:07.434649 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 23 18:31:07.434658 kernel: raid6: avx512x4 gen() 24700 MB/s Jan 23 18:31:07.434667 kernel: raid6: avx512x2 gen() 33313 MB/s Jan 23 18:31:07.434676 kernel: raid6: avx512x1 gen() 37155 MB/s Jan 23 18:31:07.434685 kernel: raid6: avx2x4 gen() 32597 MB/s Jan 23 18:31:07.434696 kernel: raid6: avx2x2 gen() 33104 MB/s Jan 23 18:31:07.434705 kernel: raid6: avx2x1 gen() 30487 MB/s Jan 23 18:31:07.434713 kernel: raid6: using algorithm avx512x1 gen() 37155 MB/s Jan 23 18:31:07.434723 kernel: raid6: .... xor() 24709 MB/s, rmw enabled Jan 23 18:31:07.434734 kernel: raid6: using avx512x2 recovery algorithm Jan 23 18:31:07.434742 kernel: xor: automatically using best checksumming function avx Jan 23 18:31:07.434890 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 23 18:31:07.434904 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:31:07.434913 kernel: BTRFS: device fsid ae5f9861-c401-42b4-99c9-2e3fe0b343c2 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (203) Jan 23 18:31:07.434922 kernel: BTRFS info (device dm-0): first mount of filesystem ae5f9861-c401-42b4-99c9-2e3fe0b343c2 Jan 23 18:31:07.434931 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:31:07.434940 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:31:07.434948 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:31:07.434974 kernel: loop: module loaded Jan 23 18:31:07.434983 kernel: loop0: detected capacity change from 0 to 100560 Jan 23 18:31:07.434992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:31:07.435000 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 18:31:07.435010 kernel: usbcore: registered new interface driver usbhid Jan 23 18:31:07.435019 kernel: usbhid: USB HID core driver Jan 23 18:31:07.435028 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 23 18:31:07.435166 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 23 18:31:07.435180 systemd[1]: Successfully made /usr/ read-only. 
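
The GPT warnings above mean the disk image was enlarged after its partition table was written: the primary header still points to a backup ("alternate") header at LBA 25804799, while the device now has 104857600 sectors of 512 bytes (50 GiB, matching the virtio_blk capacity line), so the real last LBA is 104857599. A minimal sketch of that check, assuming 512-byte logical sectors and using /dev/vda only as an illustrative device path:

    # Minimal sketch: compare the GPT primary header's alternate-LBA field
    # with the device's actual last LBA (assumes 512-byte logical sectors).
    import os, struct

    DEV = "/dev/vda"   # illustrative path, matching the virtio disk in this log

    with open(DEV, "rb") as f:
        size_bytes = f.seek(0, os.SEEK_END)       # seek() returns the new offset
        last_lba = size_bytes // 512 - 1          # 104857599 for 104857600 sectors
        f.seek(512)                               # primary GPT header lives at LBA 1
        hdr = f.read(92)                          # header is 92 bytes

    sig, = struct.unpack_from("<8s", hdr, 0)      # should be b"EFI PART"
    alt_lba, = struct.unpack_from("<Q", hdr, 32)  # alternate (backup) header LBA
    if sig == b"EFI PART" and alt_lba != last_lba:
        print("backup GPT header is not at the end of the disk "
              f"(the kernel warning above: {alt_lba} != {last_lba})")
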
Jan 23 18:31:07.435193 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:31:07.435203 systemd[1]: Detected virtualization kvm. Jan 23 18:31:07.435212 systemd[1]: Detected architecture x86-64. Jan 23 18:31:07.435223 systemd[1]: Running in initrd. Jan 23 18:31:07.435232 systemd[1]: No hostname configured, using default hostname. Jan 23 18:31:07.435242 systemd[1]: Hostname set to . Jan 23 18:31:07.435251 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:31:07.435259 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:31:07.435269 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:31:07.435279 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:31:07.435289 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:31:07.435299 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:31:07.435308 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:31:07.435318 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:31:07.435327 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:31:07.435338 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:31:07.435348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:31:07.435357 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:31:07.435366 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:31:07.435375 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:31:07.435384 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:31:07.435393 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:31:07.435404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:31:07.435414 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:31:07.435423 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:31:07.435432 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:31:07.435441 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:31:07.435451 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:31:07.435462 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:31:07.435471 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:31:07.435480 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:31:07.435489 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:31:07.435498 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 23 18:31:07.435507 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:31:07.435516 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:31:07.435528 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:31:07.435537 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:31:07.435546 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:31:07.435555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:31:07.435565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:31:07.435576 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:31:07.435585 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:31:07.435595 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:31:07.435604 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:31:07.435632 systemd-journald[344]: Collecting audit messages is enabled. Jan 23 18:31:07.435657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:31:07.435667 kernel: audit: type=1130 audit(1769193067.335:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.435676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:31:07.435688 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:31:07.435697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:31:07.435706 kernel: Bridge firewalling registered Jan 23 18:31:07.435715 kernel: audit: type=1130 audit(1769193067.356:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.435724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:31:07.435734 kernel: audit: type=1130 audit(1769193067.364:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.435745 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:31:07.435754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:31:07.435764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:31:07.435773 kernel: audit: type=1130 audit(1769193067.395:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.435782 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:31:07.435792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 18:31:07.435802 kernel: audit: type=1130 audit(1769193067.409:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.435813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:31:07.435822 kernel: audit: type=1130 audit(1769193067.415:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.435831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:31:07.435840 kernel: audit: type=1334 audit(1769193067.419:8): prog-id=6 op=LOAD Jan 23 18:31:07.435850 systemd-journald[344]: Journal started Jan 23 18:31:07.435872 systemd-journald[344]: Runtime Journal (/run/log/journal/b0c71806158f401284bf458cb6c45ae2) is 8M, max 77.9M, 69.9M free. Jan 23 18:31:07.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.419000 audit: BPF prog-id=6 op=LOAD Jan 23 18:31:07.356770 systemd-modules-load[347]: Inserted module 'br_netfilter' Jan 23 18:31:07.438510 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:31:07.438935 dracut-cmdline[370]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:31:07.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.447454 kernel: audit: type=1130 audit(1769193067.441:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:07.456061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:31:07.472039 systemd-tmpfiles[406]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:31:07.478105 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:31:07.484044 kernel: audit: type=1130 audit(1769193067.477:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.491811 systemd-resolved[372]: Positive Trust Anchors: Jan 23 18:31:07.491824 systemd-resolved[372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:31:07.491828 systemd-resolved[372]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:31:07.491859 systemd-resolved[372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:31:07.519674 systemd-resolved[372]: Defaulting to hostname 'linux'. Jan 23 18:31:07.520535 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:31:07.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.521867 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:31:07.551986 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:31:07.568980 kernel: iscsi: registered transport (tcp) Jan 23 18:31:07.594301 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:31:07.594379 kernel: QLogic iSCSI HBA Driver Jan 23 18:31:07.623553 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:31:07.640342 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:31:07.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.641303 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:31:07.693368 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:31:07.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.695408 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
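
The dracut-cmdline hook that just finished consumes the same kernel command line recorded above (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., and so on). Dracut itself does this in shell; the following is only a rough Python illustration of splitting /proc/cmdline into flags and key=value parameters:

    # Rough illustration: split the kernel command line into bare flags and
    # key=value pairs, the way initrd components consume it.
    cmdline = open("/proc/cmdline").read().split()

    params, flags = {}, []
    for token in cmdline:
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value      # repeated keys (console= appears twice) overwrite
        else:
            flags.append(token)

    print(params.get("root"))            # e.g. LABEL=ROOT in the log above
    print(params.get("verity.usrhash"))  # dm-verity root hash for the /usr image
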
Jan 23 18:31:07.696459 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:31:07.737265 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:31:07.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.737000 audit: BPF prog-id=7 op=LOAD Jan 23 18:31:07.738000 audit: BPF prog-id=8 op=LOAD Jan 23 18:31:07.739865 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:31:07.767223 systemd-udevd[624]: Using default interface naming scheme 'v257'. Jan 23 18:31:07.776137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:31:07.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.777685 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:31:07.802057 dracut-pre-trigger[685]: rd.md=0: removing MD RAID activation Jan 23 18:31:07.808384 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:31:07.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.809000 audit: BPF prog-id=9 op=LOAD Jan 23 18:31:07.812068 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:31:07.830512 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:31:07.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.833070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:31:07.856265 systemd-networkd[734]: lo: Link UP Jan 23 18:31:07.856270 systemd-networkd[734]: lo: Gained carrier Jan 23 18:31:07.857769 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:31:07.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.858284 systemd[1]: Reached target network.target - Network. Jan 23 18:31:07.922747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:31:07.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:07.924310 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:31:08.044865 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 18:31:08.064914 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 18:31:08.074765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:31:08.083984 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 23 18:31:08.086100 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:31:08.097006 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:31:08.097760 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:31:08.097767 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:31:08.098231 systemd-networkd[734]: eth0: Link UP Jan 23 18:31:08.100069 systemd-networkd[734]: eth0: Gained carrier Jan 23 18:31:08.100080 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:31:08.121404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:31:08.121584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:31:08.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:08.126156 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:31:08.132667 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 23 18:31:08.132698 kernel: audit: type=1131 audit(1769193068.125:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:08.132713 disk-uuid[799]: Primary Header is updated. Jan 23 18:31:08.132713 disk-uuid[799]: Secondary Entries is updated. Jan 23 18:31:08.132713 disk-uuid[799]: Secondary Header is updated. Jan 23 18:31:08.137130 kernel: AES CTR mode by8 optimization enabled Jan 23 18:31:08.137335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:31:08.141019 systemd-networkd[734]: eth0: DHCPv4 address 10.0.6.238/25, gateway 10.0.6.129 acquired from 10.0.6.129 Jan 23 18:31:08.169329 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 23 18:31:08.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:08.184661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:31:08.190538 kernel: audit: type=1130 audit(1769193068.184:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:08.259102 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:31:08.263556 kernel: audit: type=1130 audit(1769193068.258:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:08.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:08.260085 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
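
The DHCPv4 lease logged above (10.0.6.238/25 with gateway 10.0.6.129) only yields a usable default route because both addresses fall inside the same /25, 10.0.6.128-10.0.6.255. A small sanity check with Python's ipaddress module, nothing more:

    # Sanity-check the DHCPv4 lease seen above: host and gateway must share
    # the same /25 for the default route to be on-link.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.6.238/25")
    gateway = ipaddress.ip_address("10.0.6.129")

    print(iface.network)             # 10.0.6.128/25
    print(gateway in iface.network)  # True -> gateway is directly reachable
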
Jan 23 18:31:08.263973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:31:08.264828 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:31:08.266748 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:31:08.287645 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:31:08.292074 kernel: audit: type=1130 audit(1769193068.287:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:08.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.205375 disk-uuid[800]: Warning: The kernel is still using the old partition table. Jan 23 18:31:09.205375 disk-uuid[800]: The new table will be used at the next reboot or after you Jan 23 18:31:09.205375 disk-uuid[800]: run partprobe(8) or kpartx(8) Jan 23 18:31:09.205375 disk-uuid[800]: The operation has completed successfully. Jan 23 18:31:09.217909 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:31:09.242182 kernel: audit: type=1130 audit(1769193069.218:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.242251 kernel: audit: type=1131 audit(1769193069.218:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.218147 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:31:09.223191 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:31:09.308076 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (917) Jan 23 18:31:09.315298 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:31:09.315406 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:31:09.328498 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:31:09.328603 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:31:09.341992 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:31:09.341868 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:31:09.347116 kernel: audit: type=1130 audit(1769193069.341:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:09.344508 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:31:09.549368 systemd-networkd[734]: eth0: Gained IPv6LL Jan 23 18:31:09.691258 ignition[936]: Ignition 2.24.0 Jan 23 18:31:09.691269 ignition[936]: Stage: fetch-offline Jan 23 18:31:09.691305 ignition[936]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:31:09.691315 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:31:09.694215 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:31:09.691392 ignition[936]: parsed url from cmdline: "" Jan 23 18:31:09.691395 ignition[936]: no config URL provided Jan 23 18:31:09.691400 ignition[936]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:31:09.702727 kernel: audit: type=1130 audit(1769193069.694:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:09.699345 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 18:31:09.691408 ignition[936]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:31:09.691413 ignition[936]: failed to fetch config: resource requires networking Jan 23 18:31:09.691561 ignition[936]: Ignition finished successfully Jan 23 18:31:09.723001 ignition[943]: Ignition 2.24.0 Jan 23 18:31:09.723009 ignition[943]: Stage: fetch Jan 23 18:31:09.723148 ignition[943]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:31:09.723155 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:31:09.723235 ignition[943]: parsed url from cmdline: "" Jan 23 18:31:09.723238 ignition[943]: no config URL provided Jan 23 18:31:09.723246 ignition[943]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:31:09.723252 ignition[943]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:31:09.723319 ignition[943]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 18:31:09.723333 ignition[943]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 23 18:31:09.723353 ignition[943]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 23 18:31:10.568327 ignition[943]: GET result: OK Jan 23 18:31:10.568542 ignition[943]: parsing config with SHA512: 5456c2a6cbbd1b6cd7987b9ccde95e900aaef0fca1c865bdb871a81c8a60e1a5b9a1c06fac5892ac49b883b8a2bdd2a438fd16e4418196830b549e74b1bb0db5 Jan 23 18:31:10.586994 unknown[943]: fetched base config from "system" Jan 23 18:31:10.588800 unknown[943]: fetched base config from "system" Jan 23 18:31:10.588918 unknown[943]: fetched user config from "openstack" Jan 23 18:31:10.590314 ignition[943]: fetch: fetch complete Jan 23 18:31:10.590329 ignition[943]: fetch: fetch passed Jan 23 18:31:10.590526 ignition[943]: Ignition finished successfully Jan 23 18:31:10.595563 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:31:10.608174 kernel: audit: type=1130 audit(1769193070.596:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:10.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:10.600854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:31:10.656128 ignition[950]: Ignition 2.24.0 Jan 23 18:31:10.656148 ignition[950]: Stage: kargs Jan 23 18:31:10.656422 ignition[950]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:31:10.656438 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:31:10.658047 ignition[950]: kargs: kargs passed Jan 23 18:31:10.658112 ignition[950]: Ignition finished successfully Jan 23 18:31:10.670822 kernel: audit: type=1130 audit(1769193070.663:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:10.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:10.663076 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:31:10.668174 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:31:10.708261 ignition[956]: Ignition 2.24.0 Jan 23 18:31:10.708277 ignition[956]: Stage: disks Jan 23 18:31:10.708544 ignition[956]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:31:10.708558 ignition[956]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:31:10.711859 ignition[956]: disks: disks passed Jan 23 18:31:10.711917 ignition[956]: Ignition finished successfully Jan 23 18:31:10.715950 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:31:10.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:10.717879 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:31:10.719378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:31:10.720417 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:31:10.721515 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:31:10.722431 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:31:10.724476 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:31:10.793554 systemd-fsck[965]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 23 18:31:10.797830 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:31:10.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:10.801012 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:31:11.010998 kernel: EXT4-fs (vda9): mounted filesystem eebf2bdd-2461-4b18-9f37-721daf86511d r/w with ordered data mode. Quota mode: none. Jan 23 18:31:11.012387 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:31:11.015606 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jan 23 18:31:11.020540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:31:11.023224 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:31:11.025510 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:31:11.030192 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 18:31:11.032087 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:31:11.032136 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:31:11.054274 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:31:11.071527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:31:11.096004 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (973) Jan 23 18:31:11.100027 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:31:11.112005 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:31:11.132067 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:31:11.132139 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:31:11.134702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:31:11.169986 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:11.286397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:31:11.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:11.288722 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:31:11.290094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:31:11.305611 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:31:11.308106 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:31:11.336087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:31:11.337243 ignition[1073]: INFO : Ignition 2.24.0 Jan 23 18:31:11.337243 ignition[1073]: INFO : Stage: mount Jan 23 18:31:11.337243 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:31:11.337243 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:31:11.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:11.339784 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:31:11.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:11.341179 ignition[1073]: INFO : mount: mount passed Jan 23 18:31:11.341179 ignition[1073]: INFO : Ignition finished successfully Jan 23 18:31:12.222976 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:14.233085 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:18.248026 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:18.259374 coreos-metadata[975]: Jan 23 18:31:18.259 WARN failed to locate config-drive, using the metadata service API instead Jan 23 18:31:18.303848 coreos-metadata[975]: Jan 23 18:31:18.303 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 18:31:18.933280 coreos-metadata[975]: Jan 23 18:31:18.933 INFO Fetch successful Jan 23 18:31:18.934540 coreos-metadata[975]: Jan 23 18:31:18.934 INFO wrote hostname ci-4547-1-0-1-5b0cac0ed6 to /sysroot/etc/hostname Jan 23 18:31:18.936349 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 18:31:18.950661 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 23 18:31:18.950706 kernel: audit: type=1130 audit(1769193078.935:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:18.950729 kernel: audit: type=1131 audit(1769193078.935:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:18.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:18.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:18.936534 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 18:31:18.940122 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:31:18.970941 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:31:19.020009 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1090) Jan 23 18:31:19.026775 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:31:19.026880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:31:19.041039 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:31:19.041127 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:31:19.045554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:31:19.076745 ignition[1107]: INFO : Ignition 2.24.0 Jan 23 18:31:19.076745 ignition[1107]: INFO : Stage: files Jan 23 18:31:19.078221 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:31:19.078221 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:31:19.078221 ignition[1107]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:31:19.079907 ignition[1107]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:31:19.079907 ignition[1107]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:31:19.085569 ignition[1107]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:31:19.086288 ignition[1107]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:31:19.086844 ignition[1107]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:31:19.086671 unknown[1107]: wrote ssh authorized keys file for user: core Jan 23 18:31:19.089540 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:31:19.090655 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:31:19.148360 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:31:19.267492 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:31:19.267492 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:31:19.269251 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:31:19.269251 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:31:19.269251 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:31:19.269251 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:31:19.269251 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:31:19.269251 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:31:19.269251 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:31:19.272469 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:31:19.272469 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:31:19.272469 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:31:19.272469 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:31:19.272469 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:31:19.272469 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 18:31:19.645364 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:31:21.524666 ignition[1107]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:31:21.524666 ignition[1107]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:31:21.529684 ignition[1107]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:31:21.535178 ignition[1107]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:31:21.535178 ignition[1107]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:31:21.535178 ignition[1107]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:31:21.545783 kernel: audit: type=1130 audit(1769193081.538:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.545897 ignition[1107]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:31:21.545897 ignition[1107]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:31:21.545897 ignition[1107]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:31:21.545897 ignition[1107]: INFO : files: files passed Jan 23 18:31:21.545897 ignition[1107]: INFO : Ignition finished successfully Jan 23 18:31:21.538443 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:31:21.541795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:31:21.551417 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:31:21.560374 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:31:21.561064 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:31:21.575121 kernel: audit: type=1130 audit(1769193081.563:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.575159 kernel: audit: type=1131 audit(1769193081.563:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:21.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.583313 initrd-setup-root-after-ignition[1140]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:31:21.583313 initrd-setup-root-after-ignition[1140]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:31:21.585575 initrd-setup-root-after-ignition[1144]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:31:21.587194 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:31:21.593088 kernel: audit: type=1130 audit(1769193081.587:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.588509 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:31:21.595137 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:31:21.646617 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:31:21.646790 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:31:21.657795 kernel: audit: type=1130 audit(1769193081.647:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.657830 kernel: audit: type=1131 audit(1769193081.647:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.648827 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:31:21.658700 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:31:21.660410 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:31:21.661890 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:31:21.690710 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:31:21.695380 kernel: audit: type=1130 audit(1769193081.690:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:31:21.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.694091 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:31:21.713831 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:31:21.713978 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:31:21.715723 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:31:21.717017 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:31:21.718334 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:31:21.723280 kernel: audit: type=1131 audit(1769193081.718:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.718502 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:31:21.723448 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:31:21.724798 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:31:21.726011 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:31:21.727172 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:31:21.728334 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:31:21.729508 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:31:21.730691 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:31:21.731870 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:31:21.732975 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:31:21.734204 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:31:21.735340 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:31:21.736507 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:31:21.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.736744 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:31:21.738230 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:31:21.739549 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:31:21.740542 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:31:21.740694 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:31:21.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.741730 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 23 18:31:21.741949 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:31:21.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.743384 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:31:21.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.743540 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:31:21.744673 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:31:21.744815 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:31:21.746990 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:31:21.749050 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:31:21.749208 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:31:21.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.763167 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:31:21.763802 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:31:21.763945 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:31:21.767896 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:31:21.768698 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:31:21.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.770148 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:31:21.770888 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:31:21.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.780655 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:31:21.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.780778 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 18:31:21.784894 ignition[1164]: INFO : Ignition 2.24.0 Jan 23 18:31:21.784894 ignition[1164]: INFO : Stage: umount Jan 23 18:31:21.787045 ignition[1164]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:31:21.787045 ignition[1164]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:31:21.787045 ignition[1164]: INFO : umount: umount passed Jan 23 18:31:21.787045 ignition[1164]: INFO : Ignition finished successfully Jan 23 18:31:21.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.787254 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:31:21.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.787384 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:31:21.788476 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:31:21.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.788569 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:31:21.789236 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:31:21.789281 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:31:21.789942 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:31:21.790036 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:31:21.790782 systemd[1]: Stopped target network.target - Network. Jan 23 18:31:21.791566 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:31:21.791616 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:31:21.792460 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:31:21.794108 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:31:21.798011 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:31:21.798478 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:31:21.799283 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:31:21.800901 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:31:21.800941 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:31:21.801717 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:31:21.801753 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:31:21.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:21.803118 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 18:31:21.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.803147 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:31:21.803916 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:31:21.803980 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:31:21.804733 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:31:21.804774 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:31:21.806380 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:31:21.810068 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:31:21.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.811365 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:31:21.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.812042 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:31:21.812138 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:31:21.814283 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:31:21.814368 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:31:21.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.816913 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:31:21.817016 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:31:21.817000 audit: BPF prog-id=9 op=UNLOAD Jan 23 18:31:21.819266 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:31:21.819669 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:31:21.819712 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:31:21.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.820408 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:31:21.820453 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:31:21.822000 audit: BPF prog-id=6 op=UNLOAD Jan 23 18:31:21.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.821706 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:31:21.823037 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:31:21.823451 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 23 18:31:21.825027 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:31:21.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.825065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:31:21.826271 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:31:21.826308 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:31:21.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.827414 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:31:21.836520 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:31:21.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.837143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:31:21.838367 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:31:21.838408 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:31:21.839601 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:31:21.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.839785 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:31:21.840353 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:31:21.840394 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:31:21.842062 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:31:21.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.842105 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:31:21.842875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:31:21.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.842913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:31:21.845455 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:31:21.846216 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:31:21.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.846263 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:31:21.847068 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 23 18:31:21.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.847104 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:31:21.848367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:31:21.848403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:31:21.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.865343 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:31:21.865457 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:31:21.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.867865 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:31:21.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:21.867939 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:31:21.869135 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:31:21.870846 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:31:21.894033 systemd[1]: Switching root. Jan 23 18:31:21.953689 systemd-journald[344]: Journal stopped Jan 23 18:31:23.299143 systemd-journald[344]: Received SIGTERM from PID 1 (systemd). Jan 23 18:31:23.299231 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:31:23.299251 kernel: SELinux: policy capability open_perms=1 Jan 23 18:31:23.299267 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:31:23.299278 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:31:23.299289 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:31:23.299303 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:31:23.299317 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:31:23.299332 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:31:23.299346 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:31:23.299359 systemd[1]: Successfully loaded SELinux policy in 74.796ms. Jan 23 18:31:23.299372 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.943ms. Jan 23 18:31:23.299384 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:31:23.299396 systemd[1]: Detected virtualization kvm. 
Jan 23 18:31:23.299408 systemd[1]: Detected architecture x86-64. Jan 23 18:31:23.299421 systemd[1]: Detected first boot. Jan 23 18:31:23.299436 systemd[1]: Hostname set to . Jan 23 18:31:23.299449 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:31:23.299461 zram_generator::config[1207]: No configuration found. Jan 23 18:31:23.299478 kernel: Guest personality initialized and is inactive Jan 23 18:31:23.299489 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:31:23.299501 kernel: Initialized host personality Jan 23 18:31:23.299513 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:31:23.299526 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:31:23.299540 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:31:23.299554 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:31:23.299565 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:31:23.299580 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:31:23.299591 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:31:23.299606 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:31:23.299618 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:31:23.299631 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:31:23.299646 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:31:23.299658 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:31:23.299672 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:31:23.299684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:31:23.299698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:31:23.299710 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:31:23.299721 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:31:23.299732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:31:23.299744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:31:23.299758 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:31:23.299769 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:31:23.299780 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:31:23.299791 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:31:23.299804 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:31:23.299815 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:31:23.299827 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:31:23.299840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:31:23.299852 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:31:23.299863 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. 
Jan 23 18:31:23.299875 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:31:23.299886 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:31:23.299898 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:31:23.299910 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:31:23.299922 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:31:23.299933 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:31:23.299945 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 18:31:23.299965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:31:23.299978 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 18:31:23.299989 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 18:31:23.300001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:31:23.300015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:31:23.300026 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:31:23.300084 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:31:23.300097 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:31:23.300109 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:31:23.300120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:31:23.300132 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:31:23.300145 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:31:23.300156 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:31:23.300168 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:31:23.300179 systemd[1]: Reached target machines.target - Containers. Jan 23 18:31:23.300193 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:31:23.300204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:31:23.300216 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:31:23.300229 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:31:23.300240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:31:23.300251 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:31:23.300265 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:31:23.300275 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:31:23.300287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:31:23.300298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:31:23.300309 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 23 18:31:23.300320 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:31:23.300332 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:31:23.300344 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:31:23.300356 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:31:23.300367 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:31:23.300378 kernel: fuse: init (API version 7.41) Jan 23 18:31:23.300389 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:31:23.300400 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:31:23.300414 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:31:23.300427 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:31:23.300440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:31:23.300452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:31:23.300464 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:31:23.300475 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:31:23.300487 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:31:23.300498 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:31:23.300511 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:31:23.300524 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:31:23.300534 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:31:23.300546 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:31:23.300558 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:31:23.300570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:31:23.300581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:31:23.300592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:31:23.300602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:31:23.300613 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:31:23.300625 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:31:23.300638 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:31:23.300649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:31:23.300661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:31:23.300672 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:31:23.301407 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 18:31:23.301448 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:31:23.301463 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 23 18:31:23.301481 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:31:23.301497 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:31:23.301510 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:31:23.301524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:31:23.301539 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:31:23.301552 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:31:23.301566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:31:23.301582 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:31:23.301595 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:31:23.301608 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:31:23.301622 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:31:23.301635 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:31:23.301647 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:31:23.301658 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:31:23.301671 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:31:23.301682 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:31:23.301693 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:31:23.301705 kernel: loop1: detected capacity change from 0 to 1656 Jan 23 18:31:23.301717 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:31:23.301729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:31:23.301762 systemd-journald[1278]: Collecting audit messages is enabled. Jan 23 18:31:23.301789 systemd-journald[1278]: Journal started Jan 23 18:31:23.301812 systemd-journald[1278]: Runtime Journal (/run/log/journal/b0c71806158f401284bf458cb6c45ae2) is 8M, max 77.9M, 69.9M free. Jan 23 18:31:23.305699 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:31:23.305742 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:31:23.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:23.106000 audit: BPF prog-id=14 op=UNLOAD Jan 23 18:31:23.106000 audit: BPF prog-id=13 op=UNLOAD Jan 23 18:31:23.108000 audit: BPF prog-id=15 op=LOAD Jan 23 18:31:23.108000 audit: BPF prog-id=16 op=LOAD Jan 23 18:31:23.108000 audit: BPF prog-id=17 op=LOAD Jan 23 18:31:23.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:23.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.294000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 18:31:23.294000 audit[1278]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcc8c65330 a2=4000 a3=0 items=0 ppid=1 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:31:23.294000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 18:31:23.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:22.875107 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:31:22.899968 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:31:22.900411 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:31:23.315979 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:31:23.316020 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:31:23.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.324593 kernel: ACPI: bus type drm_connector registered Jan 23 18:31:23.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.318703 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:31:23.322256 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:31:23.322425 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 23 18:31:23.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.331064 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:31:23.344984 kernel: loop2: detected capacity change from 0 to 111560 Jan 23 18:31:23.358548 systemd-journald[1278]: Time spent on flushing to /var/log/journal/b0c71806158f401284bf458cb6c45ae2 is 31.173ms for 1847 entries. Jan 23 18:31:23.358548 systemd-journald[1278]: System Journal (/var/log/journal/b0c71806158f401284bf458cb6c45ae2) is 8M, max 588.1M, 580.1M free. Jan 23 18:31:23.398984 systemd-journald[1278]: Received client request to flush runtime journal. Jan 23 18:31:23.399020 kernel: loop3: detected capacity change from 0 to 219144 Jan 23 18:31:23.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.393000 audit: BPF prog-id=18 op=LOAD Jan 23 18:31:23.393000 audit: BPF prog-id=19 op=LOAD Jan 23 18:31:23.393000 audit: BPF prog-id=20 op=LOAD Jan 23 18:31:23.396000 audit: BPF prog-id=21 op=LOAD Jan 23 18:31:23.365885 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:31:23.389394 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:31:23.396107 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 18:31:23.399711 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:31:23.405090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:31:23.405936 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:31:23.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.409000 audit: BPF prog-id=22 op=LOAD Jan 23 18:31:23.409000 audit: BPF prog-id=23 op=LOAD Jan 23 18:31:23.409000 audit: BPF prog-id=24 op=LOAD Jan 23 18:31:23.412000 audit: BPF prog-id=25 op=LOAD Jan 23 18:31:23.412000 audit: BPF prog-id=26 op=LOAD Jan 23 18:31:23.412000 audit: BPF prog-id=27 op=LOAD Jan 23 18:31:23.411967 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 18:31:23.414077 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:31:23.445976 kernel: loop4: detected capacity change from 0 to 50784 Jan 23 18:31:23.470369 systemd-tmpfiles[1355]: ACLs are not supported, ignoring. Jan 23 18:31:23.471661 systemd-tmpfiles[1355]: ACLs are not supported, ignoring. Jan 23 18:31:23.479572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 18:31:23.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.491855 systemd-nsresourced[1357]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 18:31:23.494531 kernel: loop5: detected capacity change from 0 to 1656 Jan 23 18:31:23.494053 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:31:23.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.497295 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 18:31:23.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.520976 kernel: loop6: detected capacity change from 0 to 111560 Jan 23 18:31:23.553978 kernel: loop7: detected capacity change from 0 to 219144 Jan 23 18:31:23.592977 kernel: loop1: detected capacity change from 0 to 50784 Jan 23 18:31:23.605663 systemd-oomd[1352]: No swap; memory pressure usage will be degraded Jan 23 18:31:23.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.606123 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 18:31:23.614182 systemd-resolved[1353]: Positive Trust Anchors: Jan 23 18:31:23.614194 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:31:23.614199 systemd-resolved[1353]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:31:23.614314 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:31:23.628082 (sd-merge)[1363]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Jan 23 18:31:23.638971 systemd-resolved[1353]: Using system hostname 'ci-4547-1-0-1-5b0cac0ed6'. Jan 23 18:31:23.640068 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:31:23.640448 (sd-merge)[1363]: Merged extensions into '/usr'. Jan 23 18:31:23.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.640843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 18:31:23.645501 systemd[1]: Reload requested from client PID 1314 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:31:23.645513 systemd[1]: Reloading... Jan 23 18:31:23.695707 zram_generator::config[1403]: No configuration found. Jan 23 18:31:23.889515 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:31:23.889866 systemd[1]: Reloading finished in 243 ms. Jan 23 18:31:23.923801 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:31:23.924609 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:31:23.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:23.935035 systemd[1]: Starting ensure-sysext.service... Jan 23 18:31:23.939080 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:31:23.938000 audit: BPF prog-id=8 op=UNLOAD Jan 23 18:31:23.942298 kernel: kauditd_printk_skb: 105 callbacks suppressed Jan 23 18:31:23.942347 kernel: audit: type=1334 audit(1769193083.938:151): prog-id=8 op=UNLOAD Jan 23 18:31:23.942452 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:31:23.938000 audit: BPF prog-id=7 op=UNLOAD Jan 23 18:31:23.943257 kernel: audit: type=1334 audit(1769193083.938:152): prog-id=7 op=UNLOAD Jan 23 18:31:23.939000 audit: BPF prog-id=28 op=LOAD Jan 23 18:31:23.944158 kernel: audit: type=1334 audit(1769193083.939:153): prog-id=28 op=LOAD Jan 23 18:31:23.939000 audit: BPF prog-id=29 op=LOAD Jan 23 18:31:23.945006 kernel: audit: type=1334 audit(1769193083.939:154): prog-id=29 op=LOAD Jan 23 18:31:23.947000 audit: BPF prog-id=30 op=LOAD Jan 23 18:31:23.948975 kernel: audit: type=1334 audit(1769193083.947:155): prog-id=30 op=LOAD Jan 23 18:31:23.948000 audit: BPF prog-id=22 op=UNLOAD Jan 23 18:31:23.949976 kernel: audit: type=1334 audit(1769193083.948:156): prog-id=22 op=UNLOAD Jan 23 18:31:23.949000 audit: BPF prog-id=31 op=LOAD Jan 23 18:31:23.949000 audit: BPF prog-id=32 op=LOAD Jan 23 18:31:23.952152 kernel: audit: type=1334 audit(1769193083.949:157): prog-id=31 op=LOAD Jan 23 18:31:23.952182 kernel: audit: type=1334 audit(1769193083.949:158): prog-id=32 op=LOAD Jan 23 18:31:23.949000 audit: BPF prog-id=23 op=UNLOAD Jan 23 18:31:23.949000 audit: BPF prog-id=24 op=UNLOAD Jan 23 18:31:23.955337 kernel: audit: type=1334 audit(1769193083.949:159): prog-id=23 op=UNLOAD Jan 23 18:31:23.955372 kernel: audit: type=1334 audit(1769193083.949:160): prog-id=24 op=UNLOAD Jan 23 18:31:23.949000 audit: BPF prog-id=33 op=LOAD Jan 23 18:31:23.949000 audit: BPF prog-id=21 op=UNLOAD Jan 23 18:31:23.952000 audit: BPF prog-id=34 op=LOAD Jan 23 18:31:23.952000 audit: BPF prog-id=15 op=UNLOAD Jan 23 18:31:23.952000 audit: BPF prog-id=35 op=LOAD Jan 23 18:31:23.958000 audit: BPF prog-id=36 op=LOAD Jan 23 18:31:23.958000 audit: BPF prog-id=16 op=UNLOAD Jan 23 18:31:23.958000 audit: BPF prog-id=17 op=UNLOAD Jan 23 18:31:23.958000 audit: BPF prog-id=37 op=LOAD Jan 23 18:31:23.958000 audit: BPF prog-id=25 op=UNLOAD Jan 23 18:31:23.958000 audit: BPF prog-id=38 
op=LOAD Jan 23 18:31:23.958000 audit: BPF prog-id=39 op=LOAD Jan 23 18:31:23.958000 audit: BPF prog-id=26 op=UNLOAD Jan 23 18:31:23.958000 audit: BPF prog-id=27 op=UNLOAD Jan 23 18:31:23.959000 audit: BPF prog-id=40 op=LOAD Jan 23 18:31:23.959000 audit: BPF prog-id=18 op=UNLOAD Jan 23 18:31:23.959000 audit: BPF prog-id=41 op=LOAD Jan 23 18:31:23.959000 audit: BPF prog-id=42 op=LOAD Jan 23 18:31:23.959000 audit: BPF prog-id=19 op=UNLOAD Jan 23 18:31:23.959000 audit: BPF prog-id=20 op=UNLOAD Jan 23 18:31:23.964265 systemd[1]: Reload requested from client PID 1450 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:31:23.964278 systemd[1]: Reloading... Jan 23 18:31:23.986876 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:31:23.986900 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:31:23.987131 systemd-tmpfiles[1451]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:31:23.988082 systemd-tmpfiles[1451]: ACLs are not supported, ignoring. Jan 23 18:31:23.988130 systemd-tmpfiles[1451]: ACLs are not supported, ignoring. Jan 23 18:31:23.989133 systemd-udevd[1452]: Using default interface naming scheme 'v257'. Jan 23 18:31:23.998291 systemd-tmpfiles[1451]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:31:23.998301 systemd-tmpfiles[1451]: Skipping /boot Jan 23 18:31:24.008306 systemd-tmpfiles[1451]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:31:24.008317 systemd-tmpfiles[1451]: Skipping /boot Jan 23 18:31:24.035988 zram_generator::config[1484]: No configuration found. Jan 23 18:31:24.184994 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:31:24.205978 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 18:31:24.216021 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:31:24.285136 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 23 18:31:24.289995 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:31:24.292160 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 23 18:31:24.292387 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 18:31:24.292402 kernel: [drm] features: -context_init Jan 23 18:31:24.300286 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 18:31:24.300516 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:31:24.300650 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:31:24.302614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:31:24.303025 kernel: [drm] number of scanouts: 1 Jan 23 18:31:24.303044 kernel: [drm] number of cap sets: 0 Jan 23 18:31:24.303912 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:31:24.304150 systemd[1]: Reloading finished in 339 ms. Jan 23 18:31:24.312011 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:31:24.312973 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 18:31:24.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:24.313000 audit: BPF prog-id=43 op=LOAD Jan 23 18:31:24.313000 audit: BPF prog-id=34 op=UNLOAD Jan 23 18:31:24.313000 audit: BPF prog-id=44 op=LOAD Jan 23 18:31:24.313000 audit: BPF prog-id=45 op=LOAD Jan 23 18:31:24.313000 audit: BPF prog-id=35 op=UNLOAD Jan 23 18:31:24.313000 audit: BPF prog-id=36 op=UNLOAD Jan 23 18:31:24.314000 audit: BPF prog-id=46 op=LOAD Jan 23 18:31:24.314000 audit: BPF prog-id=47 op=LOAD Jan 23 18:31:24.314000 audit: BPF prog-id=28 op=UNLOAD Jan 23 18:31:24.314000 audit: BPF prog-id=29 op=UNLOAD Jan 23 18:31:24.314000 audit: BPF prog-id=48 op=LOAD Jan 23 18:31:24.314000 audit: BPF prog-id=37 op=UNLOAD Jan 23 18:31:24.314000 audit: BPF prog-id=49 op=LOAD Jan 23 18:31:24.314000 audit: BPF prog-id=50 op=LOAD Jan 23 18:31:24.314000 audit: BPF prog-id=38 op=UNLOAD Jan 23 18:31:24.314000 audit: BPF prog-id=39 op=UNLOAD Jan 23 18:31:24.317000 audit: BPF prog-id=51 op=LOAD Jan 23 18:31:24.317000 audit: BPF prog-id=33 op=UNLOAD Jan 23 18:31:24.317000 audit: BPF prog-id=52 op=LOAD Jan 23 18:31:24.317000 audit: BPF prog-id=30 op=UNLOAD Jan 23 18:31:24.317000 audit: BPF prog-id=53 op=LOAD Jan 23 18:31:24.317000 audit: BPF prog-id=54 op=LOAD Jan 23 18:31:24.317000 audit: BPF prog-id=31 op=UNLOAD Jan 23 18:31:24.317000 audit: BPF prog-id=32 op=UNLOAD Jan 23 18:31:24.318000 audit: BPF prog-id=55 op=LOAD Jan 23 18:31:24.318000 audit: BPF prog-id=40 op=UNLOAD Jan 23 18:31:24.318000 audit: BPF prog-id=56 op=LOAD Jan 23 18:31:24.318000 audit: BPF prog-id=57 op=LOAD Jan 23 18:31:24.318000 audit: BPF prog-id=41 op=UNLOAD Jan 23 18:31:24.318000 audit: BPF prog-id=42 op=UNLOAD Jan 23 18:31:24.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:24.321291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:31:24.351247 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:31:24.353194 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:31:24.356149 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:31:24.358248 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:31:24.365197 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:31:24.365000 audit: BPF prog-id=58 op=LOAD Jan 23 18:31:24.366774 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:31:24.373202 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:31:24.394198 systemd[1]: Finished ensure-sysext.service. Jan 23 18:31:24.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:24.397610 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:31:24.399330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:31:24.403168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 23 18:31:24.406130 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:31:24.419319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:31:24.420000 audit[1577]: SYSTEM_BOOT pid=1577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 18:31:24.422328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:31:24.432554 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 23 18:31:24.433016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:31:24.433126 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:31:24.433161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:31:24.433228 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:31:24.433718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:31:24.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:24.442240 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:31:24.470848 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 23 18:31:24.470912 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:31:24.471078 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:31:24.511917 systemd-networkd[1576]: lo: Link UP Jan 23 18:31:24.511927 systemd-networkd[1576]: lo: Gained carrier Jan 23 18:31:24.518723 systemd-networkd[1576]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:31:24.518732 systemd-networkd[1576]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 18:31:24.520732 systemd-networkd[1576]: eth0: Link UP Jan 23 18:31:24.520877 systemd-networkd[1576]: eth0: Gained carrier Jan 23 18:31:24.520897 systemd-networkd[1576]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:31:24.534788 systemd-networkd[1576]: eth0: DHCPv4 address 10.0.6.238/25, gateway 10.0.6.129 acquired from 10.0.6.129 Jan 23 18:31:24.557000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:31:24.557000 audit[1608]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3657d7b0 a2=420 a3=0 items=0 ppid=1569 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:31:24.557000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:31:24.559812 augenrules[1608]: No rules Jan 23 18:31:24.571969 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 18:31:24.577558 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:31:24.578811 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:31:24.579234 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:31:24.580827 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:31:24.582038 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:31:24.583932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:31:24.584811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:31:24.585420 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:31:24.594089 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:31:24.603102 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:31:24.603302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:31:24.606828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:31:24.607996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:31:24.626724 systemd[1]: Reached target network.target - Network. Jan 23 18:31:24.631077 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:31:24.632349 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:31:24.633488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:31:24.633632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:31:24.638121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:31:24.638837 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:31:24.681612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 18:31:24.682315 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:31:24.689401 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:31:24.695214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:31:24.752884 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 18:31:24.752975 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 18:31:24.759979 kernel: PTP clock support registered Jan 23 18:31:24.765678 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 18:31:24.766745 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 23 18:31:24.810091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:31:25.192584 ldconfig[1571]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:31:25.202128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:31:25.205495 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:31:25.232386 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:31:25.234624 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:31:25.237212 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:31:25.238168 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:31:25.238702 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:31:25.240590 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:31:25.241197 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:31:25.241757 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 18:31:25.244314 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 18:31:25.244794 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:31:25.245314 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:31:25.245360 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:31:25.245830 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:31:25.248796 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:31:25.251359 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:31:25.256370 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:31:25.257970 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:31:25.259146 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:31:25.269856 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:31:25.272462 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:31:25.274403 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:31:25.277362 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 23 18:31:25.278292 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:31:25.281353 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:31:25.281404 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:31:25.284537 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:31:25.288078 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:31:25.292105 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:31:25.294058 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:31:25.298774 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:31:25.307145 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:31:25.312362 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:31:25.313556 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:31:25.318988 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:25.319204 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:31:25.326182 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:31:25.333851 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:31:25.340368 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:31:25.346154 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:31:25.354153 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:31:25.356091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:31:25.358649 jq[1646]: false Jan 23 18:31:25.364192 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:31:25.365936 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:31:25.378164 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:31:25.379179 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Refreshing passwd entry cache Jan 23 18:31:25.381985 oslogin_cache_refresh[1649]: Refreshing passwd entry cache Jan 23 18:31:25.383145 extend-filesystems[1647]: Found /dev/vda6 Jan 23 18:31:25.388945 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:31:25.390264 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:31:25.390471 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:31:25.392261 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:31:25.395287 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 18:31:25.414119 extend-filesystems[1647]: Found /dev/vda9 Jan 23 18:31:25.414119 extend-filesystems[1647]: Checking size of /dev/vda9 Jan 23 18:31:25.419836 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Failure getting users, quitting Jan 23 18:31:25.419836 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:31:25.419836 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Refreshing group entry cache Jan 23 18:31:25.410101 oslogin_cache_refresh[1649]: Failure getting users, quitting Jan 23 18:31:25.414306 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:31:25.410118 oslogin_cache_refresh[1649]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:31:25.414554 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:31:25.410164 oslogin_cache_refresh[1649]: Refreshing group entry cache Jan 23 18:31:25.424974 extend-filesystems[1647]: Resized partition /dev/vda9 Jan 23 18:31:25.426066 extend-filesystems[1689]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:31:25.431704 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Failure getting groups, quitting Jan 23 18:31:25.431704 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:31:25.431776 jq[1663]: true Jan 23 18:31:25.427431 oslogin_cache_refresh[1649]: Failure getting groups, quitting Jan 23 18:31:25.429364 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:31:25.427443 oslogin_cache_refresh[1649]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:31:25.429680 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:31:25.434154 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 23 18:31:25.455506 dbus-daemon[1644]: [system] SELinux support is enabled Jan 23 18:31:25.455723 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:31:25.461864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:31:25.461892 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:31:25.464224 update_engine[1660]: I20260123 18:31:25.463341 1660 main.cc:92] Flatcar Update Engine starting Jan 23 18:31:25.462885 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:31:25.462905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:31:25.469050 jq[1692]: true Jan 23 18:31:25.476674 tar[1670]: linux-amd64/LICENSE Jan 23 18:31:25.476674 tar[1670]: linux-amd64/helm Jan 23 18:31:25.476944 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:31:25.477264 update_engine[1660]: I20260123 18:31:25.477032 1660 update_check_scheduler.cc:74] Next update check in 3m54s Jan 23 18:31:25.498250 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 23 18:31:25.513819 chronyd[1641]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:31:25.514630 chronyd[1641]: Loaded seccomp filter (level 2) Jan 23 18:31:25.514811 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 18:31:25.605271 systemd-logind[1655]: New seat seat0. Jan 23 18:31:25.613089 systemd-networkd[1576]: eth0: Gained IPv6LL Jan 23 18:31:25.632418 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:31:25.635683 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:31:25.638727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:31:25.642426 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:31:25.651895 locksmithd[1699]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:31:25.694137 systemd-logind[1655]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 18:31:25.694157 systemd-logind[1655]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:31:25.694422 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:31:25.729342 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:31:25.767993 containerd[1695]: time="2026-01-23T18:31:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:31:25.789347 bash[1713]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:31:25.795060 containerd[1695]: time="2026-01-23T18:31:25.794149945Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 18:31:25.795157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:31:25.801670 systemd[1]: Starting sshkeys.service... 
Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814461246Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.834µs" Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814498425Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814558027Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814574345Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814728114Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814745785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814795615Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:31:25.815545 containerd[1695]: time="2026-01-23T18:31:25.814836079Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.816889640Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.816914952Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.816953045Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.816969976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.817115791Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.817125496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.817184509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.817336172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.817363863Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.817372722Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:31:25.817981 containerd[1695]: time="2026-01-23T18:31:25.817396858Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:31:25.818231 containerd[1695]: time="2026-01-23T18:31:25.817606005Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:31:25.818231 containerd[1695]: time="2026-01-23T18:31:25.817663506Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:31:25.828591 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 18:31:25.834222 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 18:31:25.848980 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:25.873646 containerd[1695]: time="2026-01-23T18:31:25.873500433Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:31:25.873646 containerd[1695]: time="2026-01-23T18:31:25.873557808Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876060238Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876083302Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876099121Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876110754Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876120747Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876129767Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876140461Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876150930Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876161239Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876172116Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876185296Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:31:25.877309 containerd[1695]: 
time="2026-01-23T18:31:25.876200634Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:31:25.877309 containerd[1695]: time="2026-01-23T18:31:25.876314396Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876333132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876348088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876358316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876368809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876377531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876389313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876399720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876409323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876418769Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876434155Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876461559Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876505088Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876516946Z" level=info msg="Start snapshots syncer" Jan 23 18:31:25.877581 containerd[1695]: time="2026-01-23T18:31:25.876535569Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:31:25.877806 containerd[1695]: time="2026-01-23T18:31:25.876753592Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:31:25.877806 containerd[1695]: time="2026-01-23T18:31:25.876811623Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.876848280Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.876927817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.876944168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.876953385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.877171762Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.877184894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.877194721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.877204984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.877254726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 
18:31:25.877917 containerd[1695]: time="2026-01-23T18:31:25.877265057Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878121543Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878141713Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878149870Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878159712Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878215971Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878230942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878252413Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878266865Z" level=info msg="runtime interface created" Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878271821Z" level=info msg="created NRI interface" Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878280028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878290806Z" level=info msg="Connect containerd service" Jan 23 18:31:25.878807 containerd[1695]: time="2026-01-23T18:31:25.878312018Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:31:25.879429 containerd[1695]: time="2026-01-23T18:31:25.879412811Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:31:26.007311 sshd_keygen[1691]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:31:26.038005 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:31:26.041474 containerd[1695]: time="2026-01-23T18:31:26.041448333Z" level=info msg="Start subscribing containerd event" Jan 23 18:31:26.041587 containerd[1695]: time="2026-01-23T18:31:26.041566514Z" level=info msg="Start recovering state" Jan 23 18:31:26.041711 containerd[1695]: time="2026-01-23T18:31:26.041606012Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:31:26.041757 containerd[1695]: time="2026-01-23T18:31:26.041746451Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 18:31:26.041914 containerd[1695]: time="2026-01-23T18:31:26.041904213Z" level=info msg="Start event monitor" Jan 23 18:31:26.041951 containerd[1695]: time="2026-01-23T18:31:26.041945000Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:31:26.041998 containerd[1695]: time="2026-01-23T18:31:26.041992139Z" level=info msg="Start streaming server" Jan 23 18:31:26.042034 containerd[1695]: time="2026-01-23T18:31:26.042023979Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:31:26.042112 containerd[1695]: time="2026-01-23T18:31:26.042103021Z" level=info msg="runtime interface starting up..." Jan 23 18:31:26.042152 containerd[1695]: time="2026-01-23T18:31:26.042146506Z" level=info msg="starting plugins..." Jan 23 18:31:26.042191 containerd[1695]: time="2026-01-23T18:31:26.042184076Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:31:26.051010 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:31:26.064814 containerd[1695]: time="2026-01-23T18:31:26.064617250Z" level=info msg="containerd successfully booted in 0.299704s" Jan 23 18:31:26.076757 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:31:26.082166 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:31:26.086387 systemd[1]: Started sshd@0-10.0.6.238:22-68.220.241.50:52522.service - OpenSSH per-connection server daemon (68.220.241.50:52522). Jan 23 18:31:26.103978 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 23 18:31:26.113997 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:31:26.114227 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:31:26.120090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:31:26.141038 extend-filesystems[1689]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 18:31:26.141038 extend-filesystems[1689]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 23 18:31:26.141038 extend-filesystems[1689]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 23 18:31:26.146357 extend-filesystems[1647]: Resized filesystem in /dev/vda9 Jan 23 18:31:26.141998 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:31:26.142240 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:31:26.147097 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:31:26.154217 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:31:26.159004 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:31:26.161865 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:31:26.321881 tar[1670]: linux-amd64/README.md Jan 23 18:31:26.342596 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:31:26.359979 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:26.659529 sshd[1766]: Accepted publickey for core from 68.220.241.50 port 52522 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:26.665135 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:26.680795 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:31:26.683333 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
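The "starting cri plugin" entry above dumps containerd's effective CRI configuration as escaped JSON in its config=... field, and the "failed to load cni during init" error that follows is a direct consequence of that configuration: cni.confDir points at /etc/cni/net.d, which holds no network config yet at this stage of boot. A minimal sketch of unescaping and reading such a dump, using a hand-copied excerpt of the logged values (not the full config):

    import json

    # Hand-copied excerpt of the escaped config="..." value containerd logged above.
    escaped = ('{\\"containerd\\":{\\"defaultRuntimeName\\":\\"runc\\"},'
               '\\"cni\\":{\\"binDirs\\":[\\"/opt/cni/bin\\"],\\"confDir\\":\\"/etc/cni/net.d\\",\\"maxConfNum\\":1},'
               '\\"enableSelinux\\":true,\\"enableCDI\\":true}')

    # The journal stores the value with backslash-escaped quotes; undo that and parse.
    cfg = json.loads(escaped.replace('\\"', '"'))
    print(cfg["containerd"]["defaultRuntimeName"])  # runc
    print(cfg["cni"]["confDir"])                    # /etc/cni/net.d, empty until a CNI plugin drops a conflist there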
Jan 23 18:31:26.696733 systemd-logind[1655]: New session 1 of user core. Jan 23 18:31:26.718010 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:31:26.725367 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:31:26.752256 (systemd)[1785]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:26.758468 systemd-logind[1655]: New session 2 of user core. Jan 23 18:31:26.864978 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:26.885349 systemd[1785]: Queued start job for default target default.target. Jan 23 18:31:26.891888 systemd[1785]: Created slice app.slice - User Application Slice. Jan 23 18:31:26.891917 systemd[1785]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 18:31:26.891929 systemd[1785]: Reached target paths.target - Paths. Jan 23 18:31:26.892154 systemd[1785]: Reached target timers.target - Timers. Jan 23 18:31:26.893422 systemd[1785]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:31:26.897089 systemd[1785]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 18:31:26.907110 systemd[1785]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 18:31:26.916653 systemd[1785]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:31:26.916751 systemd[1785]: Reached target sockets.target - Sockets. Jan 23 18:31:26.916786 systemd[1785]: Reached target basic.target - Basic System. Jan 23 18:31:26.916819 systemd[1785]: Reached target default.target - Main User Target. Jan 23 18:31:26.916844 systemd[1785]: Startup finished in 146ms. Jan 23 18:31:26.917069 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:31:26.924228 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:31:27.133658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:31:27.142175 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:31:27.237314 systemd[1]: Started sshd@1-10.0.6.238:22-68.220.241.50:52536.service - OpenSSH per-connection server daemon (68.220.241.50:52536). Jan 23 18:31:27.791653 sshd[1806]: Accepted publickey for core from 68.220.241.50 port 52536 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:27.795180 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:27.808046 systemd-logind[1655]: New session 3 of user core. Jan 23 18:31:27.813358 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:31:27.975940 kubelet[1804]: E0123 18:31:27.975890 1804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:31:27.978494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:31:27.978746 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:31:27.979542 systemd[1]: kubelet.service: Consumed 1.109s CPU time, 256.7M memory peak. 
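The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join, so repeated failures before that step are expected. A hypothetical triage helper (not part of the node's tooling) that pulls the missing path out of such a journal line:

    import re

    # Excerpt of the kubelet error line from the journal above.
    line = ('kubelet[1804]: E0123 18:31:27.975890 1804 run.go:72] "command failed" '
            'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, ..."')

    m = re.search(r'failed to load kubelet config file, path: (\S+?),', line)
    if m:
        print("kubelet is waiting for", m.group(1))  # /var/lib/kubelet/config.yaml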
Jan 23 18:31:28.092881 sshd[1814]: Connection closed by 68.220.241.50 port 52536 Jan 23 18:31:28.092690 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 23 18:31:28.099076 systemd[1]: sshd@1-10.0.6.238:22-68.220.241.50:52536.service: Deactivated successfully. Jan 23 18:31:28.102324 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:31:28.105333 systemd-logind[1655]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:31:28.107042 systemd-logind[1655]: Removed session 3. Jan 23 18:31:28.211067 systemd[1]: Started sshd@2-10.0.6.238:22-68.220.241.50:52542.service - OpenSSH per-connection server daemon (68.220.241.50:52542). Jan 23 18:31:28.367021 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:28.793693 sshd[1822]: Accepted publickey for core from 68.220.241.50 port 52542 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:28.797041 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:28.810204 systemd-logind[1655]: New session 4 of user core. Jan 23 18:31:28.818599 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:31:28.882035 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:29.097253 sshd[1827]: Connection closed by 68.220.241.50 port 52542 Jan 23 18:31:29.098428 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 23 18:31:29.107712 systemd[1]: sshd@2-10.0.6.238:22-68.220.241.50:52542.service: Deactivated successfully. Jan 23 18:31:29.112780 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:31:29.115533 systemd-logind[1655]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:31:29.120693 systemd-logind[1655]: Removed session 4. 
Jan 23 18:31:32.391985 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:32.408393 coreos-metadata[1643]: Jan 23 18:31:32.408 WARN failed to locate config-drive, using the metadata service API instead Jan 23 18:31:32.449472 coreos-metadata[1643]: Jan 23 18:31:32.449 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 18:31:32.912008 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:31:32.925270 coreos-metadata[1741]: Jan 23 18:31:32.925 WARN failed to locate config-drive, using the metadata service API instead Jan 23 18:31:32.946361 coreos-metadata[1741]: Jan 23 18:31:32.946 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 18:31:33.775731 coreos-metadata[1643]: Jan 23 18:31:33.775 INFO Fetch successful Jan 23 18:31:33.775731 coreos-metadata[1643]: Jan 23 18:31:33.775 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 18:31:34.339861 coreos-metadata[1741]: Jan 23 18:31:34.339 INFO Fetch successful Jan 23 18:31:34.339861 coreos-metadata[1741]: Jan 23 18:31:34.339 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 18:31:34.932747 coreos-metadata[1643]: Jan 23 18:31:34.932 INFO Fetch successful Jan 23 18:31:34.932747 coreos-metadata[1643]: Jan 23 18:31:34.932 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 18:31:35.466329 coreos-metadata[1741]: Jan 23 18:31:35.466 INFO Fetch successful Jan 23 18:31:35.470486 unknown[1741]: wrote ssh authorized keys file for user: core Jan 23 18:31:35.519696 update-ssh-keys[1842]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:31:35.520425 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 18:31:35.522842 systemd[1]: Finished sshkeys.service. Jan 23 18:31:36.554646 coreos-metadata[1643]: Jan 23 18:31:36.554 INFO Fetch successful Jan 23 18:31:36.554646 coreos-metadata[1643]: Jan 23 18:31:36.554 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 18:31:37.128701 coreos-metadata[1643]: Jan 23 18:31:37.127 INFO Fetch successful Jan 23 18:31:37.129698 coreos-metadata[1643]: Jan 23 18:31:37.128 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 18:31:37.736231 coreos-metadata[1643]: Jan 23 18:31:37.736 INFO Fetch successful Jan 23 18:31:37.736231 coreos-metadata[1643]: Jan 23 18:31:37.736 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 18:31:38.103489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:31:38.109343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:31:38.282307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
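coreos-metadata cannot find a config-drive (the repeated "Can't lookup blockdev" kernel messages) and falls back to the link-local metadata service at 169.254.169.254, fetching the endpoints shown above. A rough equivalent of one such fetch, assuming it runs on the instance itself where that address is reachable:

    import urllib.request

    # Same endpoint the agent logs above; only resolvable from inside the instance.
    url = "http://169.254.169.254/latest/meta-data/hostname"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode().strip())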
Jan 23 18:31:38.289356 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:31:38.330494 kubelet[1853]: E0123 18:31:38.330456 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:31:38.336041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:31:38.336294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:31:38.337305 systemd[1]: kubelet.service: Consumed 174ms CPU time, 108.4M memory peak. Jan 23 18:31:39.291329 coreos-metadata[1643]: Jan 23 18:31:39.291 INFO Fetch successful Jan 23 18:31:39.342125 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 18:31:39.343179 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:31:39.343658 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:31:39.344136 systemd[1]: Startup finished in 4.006s (kernel) + 15.179s (initrd) + 17.289s (userspace) = 36.476s. Jan 23 18:31:39.744448 systemd[1]: Started sshd@3-10.0.6.238:22-68.220.241.50:54942.service - OpenSSH per-connection server daemon (68.220.241.50:54942). Jan 23 18:31:40.574020 sshd[1866]: Accepted publickey for core from 68.220.241.50 port 54942 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:40.575051 sshd-session[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:40.588490 systemd-logind[1655]: New session 5 of user core. Jan 23 18:31:40.597415 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:31:41.394504 sshd[1870]: Connection closed by 68.220.241.50 port 54942 Jan 23 18:31:41.395579 sshd-session[1866]: pam_unix(sshd:session): session closed for user core Jan 23 18:31:41.404842 systemd[1]: sshd@3-10.0.6.238:22-68.220.241.50:54942.service: Deactivated successfully. Jan 23 18:31:41.409112 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:31:41.413094 systemd-logind[1655]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:31:41.414744 systemd-logind[1655]: Removed session 5. Jan 23 18:31:41.857906 systemd[1]: Started sshd@4-10.0.6.238:22-68.220.241.50:37830.service - OpenSSH per-connection server daemon (68.220.241.50:37830). Jan 23 18:31:43.009041 sshd[1876]: Accepted publickey for core from 68.220.241.50 port 37830 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:43.011717 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:43.024333 systemd-logind[1655]: New session 6 of user core. Jan 23 18:31:43.035432 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:31:43.467436 sshd[1880]: Connection closed by 68.220.241.50 port 37830 Jan 23 18:31:43.468606 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Jan 23 18:31:43.478497 systemd[1]: sshd@4-10.0.6.238:22-68.220.241.50:37830.service: Deactivated successfully. Jan 23 18:31:43.483834 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:31:43.486374 systemd-logind[1655]: Session 6 logged out. Waiting for processes to exit. 
Jan 23 18:31:43.489552 systemd-logind[1655]: Removed session 6. Jan 23 18:31:43.676645 systemd[1]: Started sshd@5-10.0.6.238:22-68.220.241.50:37846.service - OpenSSH per-connection server daemon (68.220.241.50:37846). Jan 23 18:31:48.352732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:31:48.356453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:31:48.543783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:31:48.554433 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:31:48.592059 kubelet[1897]: E0123 18:31:48.592020 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:31:48.594297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:31:48.594483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:31:48.595111 systemd[1]: kubelet.service: Consumed 178ms CPU time, 108.1M memory peak. Jan 23 18:31:49.487084 chronyd[1641]: Selected source PHC0 Jan 23 18:31:49.487108 chronyd[1641]: System clock wrong by 1.780858 seconds Jan 23 18:31:51.267990 chronyd[1641]: System clock was stepped by 1.780858 seconds Jan 23 18:31:51.268495 systemd-resolved[1353]: Clock change detected. Flushing caches. Jan 23 18:31:55.508797 sshd[1886]: Accepted publickey for core from 68.220.241.50 port 37846 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:55.511378 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:55.522918 systemd-logind[1655]: New session 7 of user core. Jan 23 18:31:55.526066 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:31:55.920831 sshd[1905]: Connection closed by 68.220.241.50 port 37846 Jan 23 18:31:55.921555 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Jan 23 18:31:55.934712 systemd[1]: sshd@5-10.0.6.238:22-68.220.241.50:37846.service: Deactivated successfully. Jan 23 18:31:55.938016 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:31:55.939709 systemd-logind[1655]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:31:55.943163 systemd-logind[1655]: Removed session 7. Jan 23 18:31:55.945756 systemd[1]: Started sshd@6-10.0.6.238:22-68.220.241.50:52992.service - OpenSSH per-connection server daemon (68.220.241.50:52992). Jan 23 18:31:56.513373 sshd[1911]: Accepted publickey for core from 68.220.241.50 port 52992 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:56.516223 sshd-session[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:56.530935 systemd-logind[1655]: New session 8 of user core. Jan 23 18:31:56.534185 systemd[1]: Started session-8.scope - Session 8 of User core. 
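systemd keeps rescheduling kubelet.service (restart counters 1 and 2 above), and chronyd then steps the clock forward by 1.780858 seconds, which is why the journal jumps from 18:31:49 to 18:31:51. The restart cadence can be read straight off the "Started kubelet.service" timestamps seen so far; the spacing is consistent with a roughly 10 second restart delay on the unit (hand-copied values):

    from datetime import datetime

    # "Started kubelet.service" timestamps hand-copied from the journal above.
    starts = ["18:31:27.133658", "18:31:38.282307", "18:31:48.543783"]
    ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in starts]
    for a, b in zip(ts, ts[1:]):
        print(f"{(b - a).total_seconds():.1f}s between restarts")  # ~11.1s, ~10.3s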
Jan 23 18:31:56.760971 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:31:56.761257 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:31:56.778662 sudo[1916]: pam_unix(sudo:session): session closed for user root Jan 23 18:31:56.878861 sshd[1915]: Connection closed by 68.220.241.50 port 52992 Jan 23 18:31:56.878421 sshd-session[1911]: pam_unix(sshd:session): session closed for user core Jan 23 18:31:56.887391 systemd[1]: sshd@6-10.0.6.238:22-68.220.241.50:52992.service: Deactivated successfully. Jan 23 18:31:56.892131 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:31:56.897183 systemd-logind[1655]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:31:56.899303 systemd-logind[1655]: Removed session 8. Jan 23 18:31:56.998456 systemd[1]: Started sshd@7-10.0.6.238:22-68.220.241.50:53002.service - OpenSSH per-connection server daemon (68.220.241.50:53002). Jan 23 18:31:57.577901 sshd[1923]: Accepted publickey for core from 68.220.241.50 port 53002 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:57.580545 sshd-session[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:57.590036 systemd-logind[1655]: New session 9 of user core. Jan 23 18:31:57.599131 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:31:57.790990 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:31:57.791768 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:31:57.798392 sudo[1929]: pam_unix(sudo:session): session closed for user root Jan 23 18:31:57.813976 sudo[1928]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:31:57.814548 sudo[1928]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:31:57.838161 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:31:57.921000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:31:57.923560 kernel: kauditd_printk_skb: 59 callbacks suppressed Jan 23 18:31:57.923682 kernel: audit: type=1305 audit(1769193117.921:218): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:31:57.926980 augenrules[1953]: No rules Jan 23 18:31:57.921000 audit[1953]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb3a27da0 a2=420 a3=0 items=0 ppid=1934 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:31:57.934949 kernel: audit: type=1300 audit(1769193117.921:218): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb3a27da0 a2=420 a3=0 items=0 ppid=1934 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:31:57.935688 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:31:57.936259 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 18:31:57.939133 sudo[1928]: pam_unix(sudo:session): session closed for user root Jan 23 18:31:57.921000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:31:57.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:57.947831 kernel: audit: type=1327 audit(1769193117.921:218): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:31:57.947903 kernel: audit: type=1130 audit(1769193117.935:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:57.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:57.951681 kernel: audit: type=1131 audit(1769193117.935:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:57.951735 kernel: audit: type=1106 audit(1769193117.938:221): pid=1928 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:31:57.938000 audit[1928]: USER_END pid=1928 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:31:57.938000 audit[1928]: CRED_DISP pid=1928 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:31:57.955938 kernel: audit: type=1104 audit(1769193117.938:222): pid=1928 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:58.037074 sshd[1927]: Connection closed by 68.220.241.50 port 53002 Jan 23 18:31:58.036910 sshd-session[1923]: pam_unix(sshd:session): session closed for user core Jan 23 18:31:58.040000 audit[1923]: USER_END pid=1923 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.040000 audit[1923]: CRED_DISP pid=1923 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.058991 kernel: audit: type=1106 audit(1769193118.040:223): pid=1923 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.059099 kernel: audit: type=1104 audit(1769193118.040:224): pid=1923 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.059854 systemd[1]: sshd@7-10.0.6.238:22-68.220.241.50:53002.service: Deactivated successfully. Jan 23 18:31:58.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.6.238:22-68.220.241.50:53002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:58.064348 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:31:58.067855 kernel: audit: type=1131 audit(1769193118.059:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.6.238:22-68.220.241.50:53002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:31:58.070357 systemd-logind[1655]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:31:58.072796 systemd-logind[1655]: Removed session 9. Jan 23 18:31:58.146772 systemd[1]: Started sshd@8-10.0.6.238:22-68.220.241.50:53016.service - OpenSSH per-connection server daemon (68.220.241.50:53016). Jan 23 18:31:58.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.6.238:22-68.220.241.50:53016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:31:58.702000 audit[1962]: USER_ACCT pid=1962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.705712 sshd[1962]: Accepted publickey for core from 68.220.241.50 port 53016 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:31:58.704000 audit[1962]: CRED_ACQ pid=1962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.705000 audit[1962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0e4504f0 a2=3 a3=0 items=0 ppid=1 pid=1962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:31:58.705000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:31:58.707086 sshd-session[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:31:58.718172 systemd-logind[1655]: New session 10 of user core. Jan 23 18:31:58.731172 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:31:58.736000 audit[1962]: USER_START pid=1962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.741000 audit[1966]: CRED_ACQ pid=1966 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:31:58.915000 audit[1967]: USER_ACCT pid=1967 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:31:58.916593 sudo[1967]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:31:58.916000 audit[1967]: CRED_REFR pid=1967 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:31:58.918456 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:31:58.918000 audit[1967]: USER_START pid=1967 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:31:59.465201 systemd[1]: Starting docker.service - Docker Application Container Engine... 
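The audit PROCTITLE records in this journal (the auditctl record above, and the iptables/ip6tables NETFILTER_CFG records that dockerd triggers below) carry the triggering process's argv as hex with NUL separators. A decode sketch using the auditctl value logged above; the same decoding applied to the first docker rule below yields /usr/bin/iptables --wait -t nat -N DOCKER:

    # Hex proctitle value copied from the auditctl record above.
    hexstr = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(a.decode() for a in argv))  # /sbin/auditctl -R /etc/audit/audit.rules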
Jan 23 18:31:59.478076 (dockerd)[1985]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:31:59.929037 dockerd[1985]: time="2026-01-23T18:31:59.928974716Z" level=info msg="Starting up" Jan 23 18:31:59.933691 dockerd[1985]: time="2026-01-23T18:31:59.933667832Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:31:59.952075 dockerd[1985]: time="2026-01-23T18:31:59.952014427Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:32:00.017910 dockerd[1985]: time="2026-01-23T18:32:00.017876349Z" level=info msg="Loading containers: start." Jan 23 18:32:00.039860 kernel: Initializing XFRM netlink socket Jan 23 18:32:00.114000 audit[2035]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.114000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe5abf5a70 a2=0 a3=0 items=0 ppid=1985 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:32:00.117000 audit[2037]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.117000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffedc162b20 a2=0 a3=0 items=0 ppid=1985 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:32:00.120000 audit[2039]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.120000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd10360cf0 a2=0 a3=0 items=0 ppid=1985 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.120000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:32:00.123000 audit[2041]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.123000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6350f3c0 a2=0 a3=0 items=0 ppid=1985 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.123000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:32:00.126000 audit[2043]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.126000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4bfca970 a2=0 a3=0 items=0 ppid=1985 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:32:00.128000 audit[2045]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.128000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0ffe12b0 a2=0 a3=0 items=0 ppid=1985 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:32:00.130000 audit[2047]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.130000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc73ab0940 a2=0 a3=0 items=0 ppid=1985 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:32:00.133000 audit[2049]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.133000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd7bed6ad0 a2=0 a3=0 items=0 ppid=1985 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:32:00.175000 audit[2052]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.175000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdec6cf500 a2=0 a3=0 items=0 ppid=1985 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 18:32:00.177000 audit[2054]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2054 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.177000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffd0a07840 a2=0 a3=0 items=0 ppid=1985 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:32:00.181000 audit[2056]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.181000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcc380b2a0 a2=0 a3=0 items=0 ppid=1985 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:32:00.183000 audit[2058]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.183000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffea3701960 a2=0 a3=0 items=0 ppid=1985 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:32:00.186000 audit[2060]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.186000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd01864750 a2=0 a3=0 items=0 ppid=1985 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:32:00.226000 audit[2090]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.226000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc2b06b7e0 a2=0 a3=0 items=0 ppid=1985 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:32:00.228000 audit[2092]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.228000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd0ab39080 a2=0 a3=0 items=0 ppid=1985 pid=2092 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:32:00.230000 audit[2094]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.230000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe104ced30 a2=0 a3=0 items=0 ppid=1985 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:32:00.232000 audit[2096]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.232000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe35528ac0 a2=0 a3=0 items=0 ppid=1985 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:32:00.234000 audit[2098]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.234000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd8633bbd0 a2=0 a3=0 items=0 ppid=1985 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.234000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:32:00.236000 audit[2100]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.236000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd685f3d20 a2=0 a3=0 items=0 ppid=1985 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:32:00.238000 audit[2102]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.238000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe32ee8880 a2=0 a3=0 items=0 ppid=1985 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.238000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:32:00.240000 audit[2104]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.240000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffea5b9720 a2=0 a3=0 items=0 ppid=1985 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:32:00.242000 audit[2106]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.242000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe58a95a20 a2=0 a3=0 items=0 ppid=1985 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 18:32:00.243000 audit[2108]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.243000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffde90a94c0 a2=0 a3=0 items=0 ppid=1985 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:32:00.245000 audit[2110]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.245000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc77ee1560 a2=0 a3=0 items=0 ppid=1985 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:32:00.247000 audit[2112]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.247000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc7467f740 a2=0 a3=0 items=0 ppid=1985 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.247000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:32:00.248000 audit[2114]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.248000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff90c388a0 a2=0 a3=0 items=0 ppid=1985 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.248000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:32:00.253000 audit[2119]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.253000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffcd0ad790 a2=0 a3=0 items=0 ppid=1985 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.253000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:32:00.255000 audit[2121]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.255000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd94cbfe10 a2=0 a3=0 items=0 ppid=1985 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.255000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:32:00.257000 audit[2123]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.257000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd98b60b40 a2=0 a3=0 items=0 ppid=1985 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:32:00.259000 audit[2125]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.259000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd82c9bd50 a2=0 a3=0 items=0 ppid=1985 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:32:00.260000 audit[2127]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.260000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc0d3cb9a0 a2=0 a3=0 items=0 ppid=1985 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:32:00.262000 audit[2129]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:00.262000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff2900d7c0 a2=0 a3=0 items=0 ppid=1985 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:32:00.293000 audit[2134]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.293000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcaf2a9130 a2=0 a3=0 items=0 ppid=1985 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 18:32:00.296000 audit[2136]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.296000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcb84115e0 a2=0 a3=0 items=0 ppid=1985 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 18:32:00.305000 audit[2144]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.305000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc786b06c0 a2=0 a3=0 items=0 ppid=1985 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.305000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 18:32:00.317000 audit[2150]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.317000 
audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd04d45af0 a2=0 a3=0 items=0 ppid=1985 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.317000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 18:32:00.320000 audit[2152]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.320000 audit[2152]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff76f71fc0 a2=0 a3=0 items=0 ppid=1985 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 18:32:00.322000 audit[2154]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.322000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd729704b0 a2=0 a3=0 items=0 ppid=1985 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 18:32:00.324000 audit[2156]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.324000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd1cf87540 a2=0 a3=0 items=0 ppid=1985 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:32:00.326000 audit[2158]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:00.326000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd24f3be90 a2=0 a3=0 items=0 ppid=1985 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:00.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 18:32:00.327834 
systemd-networkd[1576]: docker0: Link UP Jan 23 18:32:00.335080 dockerd[1985]: time="2026-01-23T18:32:00.335012041Z" level=info msg="Loading containers: done." Jan 23 18:32:00.349012 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3095784668-merged.mount: Deactivated successfully. Jan 23 18:32:00.359102 dockerd[1985]: time="2026-01-23T18:32:00.358809636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:32:00.359102 dockerd[1985]: time="2026-01-23T18:32:00.358903289Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:32:00.359102 dockerd[1985]: time="2026-01-23T18:32:00.358971197Z" level=info msg="Initializing buildkit" Jan 23 18:32:00.382361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:32:00.384337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:32:00.387375 dockerd[1985]: time="2026-01-23T18:32:00.387318512Z" level=info msg="Completed buildkit initialization" Jan 23 18:32:00.396034 dockerd[1985]: time="2026-01-23T18:32:00.395969182Z" level=info msg="Daemon has completed initialization" Jan 23 18:32:00.396187 dockerd[1985]: time="2026-01-23T18:32:00.396155563Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:32:00.396371 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:32:00.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:00.515945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:32:00.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:00.523030 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:32:00.557475 kubelet[2201]: E0123 18:32:00.557267 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:32:00.559677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:32:00.559974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:32:00.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:32:00.560442 systemd[1]: kubelet.service: Consumed 140ms CPU time, 110.2M memory peak. Jan 23 18:32:01.912274 containerd[1695]: time="2026-01-23T18:32:01.912173650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 18:32:02.755732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436780874.mount: Deactivated successfully. 
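The audit PROCTITLE fields in the netfilter records above are the invoking command lines, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch (Python 3, not part of the captured log; the sample value is the first proctitle at the top of this stretch of records):

    # decode_proctitle.py -- turn an audit PROCTITLE hex string back into argv.
    # Arguments in the proctitle field are separated by NUL (0x00) bytes.
    import binascii

    def decode_proctitle(hex_title: str) -> str:
        raw = binascii.unhexlify(hex_title)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00"))

    sample = ("2F7573722F62696E2F6970367461626C6573002D2D77616974002D49"
              "00444F434B45522D464F5257415244002D6A"
              "00444F434B45522D49534F4C4154494F4E2D53544147452D31")
    print(decode_proctitle(sample))
    # -> /usr/bin/ip6tables --wait -I DOCKER-FORWARD -j DOCKER-ISOLATION-STAGE-1

Decoded this way, the burst of records above is simply dockerd installing its usual DOCKER-USER, DOCKER-FORWARD and DOCKER-ISOLATION chains for IPv4 and IPv6 before bringing docker0 up.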
Jan 23 18:32:03.598071 containerd[1695]: time="2026-01-23T18:32:03.598022931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:03.599484 containerd[1695]: time="2026-01-23T18:32:03.599434390Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399329" Jan 23 18:32:03.600907 containerd[1695]: time="2026-01-23T18:32:03.600872537Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:03.604040 containerd[1695]: time="2026-01-23T18:32:03.604006054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:03.604763 containerd[1695]: time="2026-01-23T18:32:03.604605971Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.692371059s" Jan 23 18:32:03.604763 containerd[1695]: time="2026-01-23T18:32:03.604637675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 18:32:03.605061 containerd[1695]: time="2026-01-23T18:32:03.605027060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 18:32:04.741879 containerd[1695]: time="2026-01-23T18:32:04.741837776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:04.743433 containerd[1695]: time="2026-01-23T18:32:04.743228688Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 23 18:32:04.744892 containerd[1695]: time="2026-01-23T18:32:04.744872245Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:04.754890 containerd[1695]: time="2026-01-23T18:32:04.754861384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:04.755785 containerd[1695]: time="2026-01-23T18:32:04.755650248Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.150601099s" Jan 23 18:32:04.755785 containerd[1695]: time="2026-01-23T18:32:04.755676022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 18:32:04.756309 
containerd[1695]: time="2026-01-23T18:32:04.756177861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 18:32:05.711637 containerd[1695]: time="2026-01-23T18:32:05.710984969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:05.712415 containerd[1695]: time="2026-01-23T18:32:05.712396875Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 23 18:32:05.714135 containerd[1695]: time="2026-01-23T18:32:05.714119293Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:05.718646 containerd[1695]: time="2026-01-23T18:32:05.718625970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:05.719227 containerd[1695]: time="2026-01-23T18:32:05.719210384Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 962.826697ms" Jan 23 18:32:05.719294 containerd[1695]: time="2026-01-23T18:32:05.719284161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 18:32:05.719784 containerd[1695]: time="2026-01-23T18:32:05.719758673Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 18:32:06.659633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287785958.mount: Deactivated successfully. 
Jan 23 18:32:06.916934 containerd[1695]: time="2026-01-23T18:32:06.916483447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:06.918792 containerd[1695]: time="2026-01-23T18:32:06.918627341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 23 18:32:06.923566 containerd[1695]: time="2026-01-23T18:32:06.923538528Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:06.929202 containerd[1695]: time="2026-01-23T18:32:06.928667901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:06.929202 containerd[1695]: time="2026-01-23T18:32:06.929075649Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.209189947s" Jan 23 18:32:06.929202 containerd[1695]: time="2026-01-23T18:32:06.929105552Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 18:32:06.929645 containerd[1695]: time="2026-01-23T18:32:06.929632044Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 18:32:07.707135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800716292.mount: Deactivated successfully. 
Jan 23 18:32:08.443236 containerd[1695]: time="2026-01-23T18:32:08.443191568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:08.444712 containerd[1695]: time="2026-01-23T18:32:08.444533035Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511" Jan 23 18:32:08.446525 containerd[1695]: time="2026-01-23T18:32:08.446506072Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:08.450199 containerd[1695]: time="2026-01-23T18:32:08.450169174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:08.451029 containerd[1695]: time="2026-01-23T18:32:08.451005228Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.521250601s" Jan 23 18:32:08.451111 containerd[1695]: time="2026-01-23T18:32:08.451098984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 18:32:08.451723 containerd[1695]: time="2026-01-23T18:32:08.451704494Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 18:32:09.096201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198459539.mount: Deactivated successfully. 
Jan 23 18:32:09.110308 containerd[1695]: time="2026-01-23T18:32:09.110238941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:09.112368 containerd[1695]: time="2026-01-23T18:32:09.112180984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 23 18:32:09.114259 containerd[1695]: time="2026-01-23T18:32:09.114227870Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:09.116768 containerd[1695]: time="2026-01-23T18:32:09.116737027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:09.118045 containerd[1695]: time="2026-01-23T18:32:09.118007740Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 666.273744ms" Jan 23 18:32:09.118157 containerd[1695]: time="2026-01-23T18:32:09.118139137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 18:32:09.118677 containerd[1695]: time="2026-01-23T18:32:09.118640629Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 18:32:09.741891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465865049.mount: Deactivated successfully. Jan 23 18:32:10.632532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 18:32:10.635105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:32:10.782994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:32:10.787085 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 23 18:32:10.787134 kernel: audit: type=1130 audit(1769193130.782:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:10.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:10.791364 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:32:10.835855 kubelet[2405]: E0123 18:32:10.834099 2405 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:32:10.838153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:32:10.838285 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
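The kubelet exits above (restart counters 3 and 4) are the same failure each time: /var/lib/kubelet/config.yaml does not exist yet. That file is typically written by kubeadm during init/join, so the unit is expected to crash-loop until then. A tiny watcher sketch, assuming Python 3 and the path taken from the error message, purely illustrative:

    # Wait for the kubelet config that kubeadm normally writes during init/join.
    import os, time

    CONFIG = "/var/lib/kubelet/config.yaml"

    while not os.path.exists(CONFIG):
        print(f"{CONFIG} not present yet; kubelet will keep crash-looping")
        time.sleep(5)
    print(f"{CONFIG} found; the next kubelet restart should get past config loading")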
Jan 23 18:32:10.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:32:10.839838 systemd[1]: kubelet.service: Consumed 149ms CPU time, 109.7M memory peak. Jan 23 18:32:10.842879 kernel: audit: type=1131 audit(1769193130.838:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:32:11.572834 containerd[1695]: time="2026-01-23T18:32:11.572754806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:11.574844 containerd[1695]: time="2026-01-23T18:32:11.574655655Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Jan 23 18:32:11.577149 containerd[1695]: time="2026-01-23T18:32:11.577106896Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:11.593831 containerd[1695]: time="2026-01-23T18:32:11.593741374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:11.594830 containerd[1695]: time="2026-01-23T18:32:11.594716368Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.476039019s" Jan 23 18:32:11.594830 containerd[1695]: time="2026-01-23T18:32:11.594749175Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 18:32:12.651337 update_engine[1660]: I20260123 18:32:12.649231 1660 update_attempter.cc:509] Updating boot flags... Jan 23 18:32:16.029675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:32:16.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:16.030946 systemd[1]: kubelet.service: Consumed 149ms CPU time, 109.7M memory peak. Jan 23 18:32:16.034860 kernel: audit: type=1130 audit(1769193136.030:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:16.035011 kernel: audit: type=1131 audit(1769193136.030:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:16.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:32:16.040436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:32:16.084165 systemd[1]: Reload requested from client PID 2460 ('systemctl') (unit session-10.scope)... Jan 23 18:32:16.084339 systemd[1]: Reloading... Jan 23 18:32:16.207840 zram_generator::config[2503]: No configuration found. Jan 23 18:32:16.435179 systemd[1]: Reloading finished in 350 ms. Jan 23 18:32:16.471197 kernel: audit: type=1334 audit(1769193136.465:282): prog-id=63 op=LOAD Jan 23 18:32:16.471292 kernel: audit: type=1334 audit(1769193136.465:283): prog-id=51 op=UNLOAD Jan 23 18:32:16.465000 audit: BPF prog-id=63 op=LOAD Jan 23 18:32:16.465000 audit: BPF prog-id=51 op=UNLOAD Jan 23 18:32:16.474095 kernel: audit: type=1334 audit(1769193136.466:284): prog-id=64 op=LOAD Jan 23 18:32:16.466000 audit: BPF prog-id=64 op=LOAD Jan 23 18:32:16.476143 kernel: audit: type=1334 audit(1769193136.466:285): prog-id=48 op=UNLOAD Jan 23 18:32:16.466000 audit: BPF prog-id=48 op=UNLOAD Jan 23 18:32:16.466000 audit: BPF prog-id=65 op=LOAD Jan 23 18:32:16.466000 audit: BPF prog-id=66 op=LOAD Jan 23 18:32:16.479317 kernel: audit: type=1334 audit(1769193136.466:286): prog-id=65 op=LOAD Jan 23 18:32:16.479354 kernel: audit: type=1334 audit(1769193136.466:287): prog-id=66 op=LOAD Jan 23 18:32:16.479374 kernel: audit: type=1334 audit(1769193136.466:288): prog-id=49 op=UNLOAD Jan 23 18:32:16.466000 audit: BPF prog-id=49 op=UNLOAD Jan 23 18:32:16.466000 audit: BPF prog-id=50 op=UNLOAD Jan 23 18:32:16.481007 kernel: audit: type=1334 audit(1769193136.466:289): prog-id=50 op=UNLOAD Jan 23 18:32:16.468000 audit: BPF prog-id=67 op=LOAD Jan 23 18:32:16.468000 audit: BPF prog-id=58 op=UNLOAD Jan 23 18:32:16.469000 audit: BPF prog-id=68 op=LOAD Jan 23 18:32:16.469000 audit: BPF prog-id=69 op=LOAD Jan 23 18:32:16.469000 audit: BPF prog-id=46 op=UNLOAD Jan 23 18:32:16.469000 audit: BPF prog-id=47 op=UNLOAD Jan 23 18:32:16.471000 audit: BPF prog-id=70 op=LOAD Jan 23 18:32:16.471000 audit: BPF prog-id=55 op=UNLOAD Jan 23 18:32:16.471000 audit: BPF prog-id=71 op=LOAD Jan 23 18:32:16.471000 audit: BPF prog-id=72 op=LOAD Jan 23 18:32:16.471000 audit: BPF prog-id=56 op=UNLOAD Jan 23 18:32:16.471000 audit: BPF prog-id=57 op=UNLOAD Jan 23 18:32:16.473000 audit: BPF prog-id=73 op=LOAD Jan 23 18:32:16.473000 audit: BPF prog-id=59 op=UNLOAD Jan 23 18:32:16.474000 audit: BPF prog-id=74 op=LOAD Jan 23 18:32:16.474000 audit: BPF prog-id=60 op=UNLOAD Jan 23 18:32:16.474000 audit: BPF prog-id=75 op=LOAD Jan 23 18:32:16.474000 audit: BPF prog-id=76 op=LOAD Jan 23 18:32:16.474000 audit: BPF prog-id=61 op=UNLOAD Jan 23 18:32:16.474000 audit: BPF prog-id=62 op=UNLOAD Jan 23 18:32:16.475000 audit: BPF prog-id=77 op=LOAD Jan 23 18:32:16.475000 audit: BPF prog-id=52 op=UNLOAD Jan 23 18:32:16.475000 audit: BPF prog-id=78 op=LOAD Jan 23 18:32:16.475000 audit: BPF prog-id=79 op=LOAD Jan 23 18:32:16.475000 audit: BPF prog-id=53 op=UNLOAD Jan 23 18:32:16.475000 audit: BPF prog-id=54 op=UNLOAD Jan 23 18:32:16.476000 audit: BPF prog-id=80 op=LOAD Jan 23 18:32:16.480000 audit: BPF prog-id=43 op=UNLOAD Jan 23 18:32:16.481000 audit: BPF prog-id=81 op=LOAD Jan 23 18:32:16.481000 audit: BPF prog-id=82 op=LOAD Jan 23 18:32:16.481000 audit: BPF prog-id=44 op=UNLOAD Jan 23 18:32:16.481000 audit: BPF prog-id=45 op=UNLOAD Jan 23 18:32:16.497233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:32:16.497302 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 23 18:32:16.497564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:32:16.497616 systemd[1]: kubelet.service: Consumed 120ms CPU time, 98.6M memory peak. Jan 23 18:32:16.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:32:16.499011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:32:16.636981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:32:16.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:16.648491 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:32:16.695682 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:32:16.695682 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:32:16.696762 kubelet[2559]: I0123 18:32:16.695596 2559 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:32:17.615329 kubelet[2559]: I0123 18:32:17.615281 2559 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:32:17.615329 kubelet[2559]: I0123 18:32:17.615314 2559 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:32:17.615329 kubelet[2559]: I0123 18:32:17.615339 2559 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:32:17.615329 kubelet[2559]: I0123 18:32:17.615345 2559 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:32:17.615703 kubelet[2559]: I0123 18:32:17.615693 2559 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:32:17.628493 kubelet[2559]: E0123 18:32:17.628453 2559 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.6.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:32:17.629154 kubelet[2559]: I0123 18:32:17.629136 2559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:32:17.637906 kubelet[2559]: I0123 18:32:17.637887 2559 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:32:17.640961 kubelet[2559]: I0123 18:32:17.640947 2559 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 18:32:17.641262 kubelet[2559]: I0123 18:32:17.641246 2559 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:32:17.641455 kubelet[2559]: I0123 18:32:17.641314 2559 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-1-0-1-5b0cac0ed6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:32:17.641585 kubelet[2559]: I0123 18:32:17.641579 2559 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:32:17.641621 kubelet[2559]: I0123 18:32:17.641617 2559 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:32:17.641750 kubelet[2559]: I0123 18:32:17.641742 2559 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:32:17.648835 kubelet[2559]: I0123 18:32:17.648804 2559 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:32:17.649138 kubelet[2559]: I0123 18:32:17.649129 2559 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:32:17.649544 kubelet[2559]: I0123 18:32:17.649533 2559 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:32:17.649621 kubelet[2559]: I0123 18:32:17.649616 2559 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:32:17.649671 kubelet[2559]: I0123 18:32:17.649666 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:32:17.650183 kubelet[2559]: E0123 18:32:17.650160 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.6.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-1-0-1-5b0cac0ed6&limit=500&resourceVersion=0\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:32:17.656904 kubelet[2559]: E0123 18:32:17.656868 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.6.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:32:17.661257 kubelet[2559]: I0123 18:32:17.661240 2559 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:32:17.661875 kubelet[2559]: I0123 18:32:17.661862 2559 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:32:17.661923 kubelet[2559]: I0123 18:32:17.661909 2559 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:32:17.661975 kubelet[2559]: W0123 18:32:17.661964 2559 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:32:17.667709 kubelet[2559]: I0123 18:32:17.667579 2559 server.go:1262] "Started kubelet" Jan 23 18:32:17.668676 kubelet[2559]: I0123 18:32:17.668659 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:32:17.675000 audit[2573]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.675000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde7632970 a2=0 a3=0 items=0 ppid=2559 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:32:17.677000 audit[2574]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.677000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc44b31670 a2=0 a3=0 items=0 ppid=2559 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:32:17.679205 kubelet[2559]: I0123 18:32:17.679164 2559 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:32:17.692842 kubelet[2559]: E0123 18:32:17.677581 2559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.6.238:6443/api/v1/namespaces/default/events\": dial tcp 10.0.6.238:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-1-0-1-5b0cac0ed6.188d6fc8fdbb8ce0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-1-0-1-5b0cac0ed6,UID:ci-4547-1-0-1-5b0cac0ed6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-1-0-1-5b0cac0ed6,},FirstTimestamp:2026-01-23 18:32:17.66754224 +0000 UTC m=+1.014614948,LastTimestamp:2026-01-23 18:32:17.66754224 +0000 UTC 
m=+1.014614948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-1-0-1-5b0cac0ed6,}" Jan 23 18:32:17.693038 kubelet[2559]: I0123 18:32:17.693009 2559 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:32:17.693165 kubelet[2559]: I0123 18:32:17.693134 2559 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:32:17.693202 kubelet[2559]: I0123 18:32:17.693186 2559 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:32:17.693489 kubelet[2559]: I0123 18:32:17.693470 2559 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:32:17.695844 kubelet[2559]: I0123 18:32:17.695809 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:32:17.697062 kubelet[2559]: E0123 18:32:17.697040 2559 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" Jan 23 18:32:17.698535 kubelet[2559]: I0123 18:32:17.698510 2559 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:32:17.698000 audit[2578]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.698000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd4b15cdc0 a2=0 a3=0 items=0 ppid=2559 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:32:17.700572 kubelet[2559]: I0123 18:32:17.698612 2559 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:32:17.700572 kubelet[2559]: I0123 18:32:17.698659 2559 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:32:17.701248 kubelet[2559]: I0123 18:32:17.701228 2559 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:32:17.701326 kubelet[2559]: I0123 18:32:17.701310 2559 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:32:17.702597 kubelet[2559]: E0123 18:32:17.702573 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.6.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:32:17.702750 kubelet[2559]: I0123 18:32:17.702716 2559 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:32:17.703593 kubelet[2559]: E0123 18:32:17.703391 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-1-5b0cac0ed6?timeout=10s\": dial tcp 10.0.6.238:6443: connect: connection refused" interval="200ms" Jan 23 18:32:17.703000 audit[2581]: NETFILTER_CFG table=filter:45 
family=2 entries=2 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.703000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcc2beace0 a2=0 a3=0 items=0 ppid=2559 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:32:17.705661 kubelet[2559]: E0123 18:32:17.705580 2559 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:32:17.720423 kubelet[2559]: I0123 18:32:17.720346 2559 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:32:17.720423 kubelet[2559]: I0123 18:32:17.720391 2559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:32:17.720423 kubelet[2559]: I0123 18:32:17.720407 2559 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:32:17.720000 audit[2586]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.720000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdfd066360 a2=0 a3=0 items=0 ppid=2559 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 23 18:32:17.722112 kubelet[2559]: I0123 18:32:17.722089 2559 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:32:17.722000 audit[2590]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.722000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff02e431f0 a2=0 a3=0 items=0 ppid=2559 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:32:17.723000 audit[2589]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:17.723000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffda1bd34d0 a2=0 a3=0 items=0 ppid=2559 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.723000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:32:17.724182 kubelet[2559]: I0123 18:32:17.724170 2559 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:32:17.724253 kubelet[2559]: I0123 18:32:17.724248 2559 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:32:17.724297 kubelet[2559]: I0123 18:32:17.724293 2559 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:32:17.724361 kubelet[2559]: E0123 18:32:17.724345 2559 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:32:17.725088 kubelet[2559]: E0123 18:32:17.725069 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.6.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:32:17.725365 kubelet[2559]: I0123 18:32:17.725324 2559 policy_none.go:49] "None policy: Start" Jan 23 18:32:17.725365 kubelet[2559]: I0123 18:32:17.725343 2559 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:32:17.725445 kubelet[2559]: I0123 18:32:17.725353 2559 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:32:17.725000 audit[2592]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.725000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd59cbea0 a2=0 a3=0 items=0 ppid=2559 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:32:17.725000 audit[2593]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:17.725000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8c9ac780 a2=0 a3=0 items=0 ppid=2559 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:32:17.726000 audit[2594]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:17.726000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde50bfd90 a2=0 a3=0 items=0 ppid=2559 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:32:17.728131 kubelet[2559]: I0123 18:32:17.728117 2559 policy_none.go:47] "Start" Jan 23 18:32:17.727000 audit[2595]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:17.727000 audit[2595]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7fffcf377120 a2=0 a3=0 items=0 ppid=2559 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:32:17.728000 audit[2596]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:17.728000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc528f3540 a2=0 a3=0 items=0 ppid=2559 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:17.728000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:32:17.732414 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:32:17.739608 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:32:17.742521 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:32:17.749523 kubelet[2559]: E0123 18:32:17.749503 2559 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:32:17.750111 kubelet[2559]: I0123 18:32:17.749675 2559 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:32:17.750111 kubelet[2559]: I0123 18:32:17.749686 2559 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:32:17.750111 kubelet[2559]: I0123 18:32:17.750001 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:32:17.751777 kubelet[2559]: E0123 18:32:17.751761 2559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:32:17.751848 kubelet[2559]: E0123 18:32:17.751795 2559 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-1-0-1-5b0cac0ed6\" not found" Jan 23 18:32:17.837953 systemd[1]: Created slice kubepods-burstable-pod267fdf13591e1c8f25d11ea3652c4c45.slice - libcontainer container kubepods-burstable-pod267fdf13591e1c8f25d11ea3652c4c45.slice. Jan 23 18:32:17.851676 kubelet[2559]: I0123 18:32:17.851632 2559 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:17.852313 kubelet[2559]: E0123 18:32:17.852281 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.238:6443/api/v1/nodes\": dial tcp 10.0.6.238:6443: connect: connection refused" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:17.853885 kubelet[2559]: E0123 18:32:17.853713 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:17.863146 systemd[1]: Created slice kubepods-burstable-pod9a09cbaef23579d43819a06118324279.slice - libcontainer container kubepods-burstable-pod9a09cbaef23579d43819a06118324279.slice. 
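The repeated "dial tcp 10.0.6.238:6443: connect: connection refused" errors above are expected at this stage: the kubelet is trying to reach an API server whose static pod it has not yet started (the kube-apiserver sandbox is only created further down). A quick reachability check equivalent to what those dials are doing, assuming Python 3 and the endpoint taken from the log:

    # Probe the API server endpoint the kubelet is dialing in the entries above.
    import socket

    HOST, PORT = "10.0.6.238", 6443

    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print(f"{HOST}:{PORT} is accepting TCP connections")
    except OSError as exc:
        # Until the kube-apiserver static pod is running this prints
        # "connection refused", matching the reflector errors above.
        print(f"{HOST}:{PORT} unreachable: {exc}")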
Jan 23 18:32:17.875517 kubelet[2559]: E0123 18:32:17.874732 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:17.879858 systemd[1]: Created slice kubepods-burstable-podde6a4a660a81d7cd7b6d81442545c79c.slice - libcontainer container kubepods-burstable-podde6a4a660a81d7cd7b6d81442545c79c.slice. Jan 23 18:32:17.883238 kubelet[2559]: E0123 18:32:17.883174 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:17.904171 kubelet[2559]: E0123 18:32:17.904130 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-1-5b0cac0ed6?timeout=10s\": dial tcp 10.0.6.238:6443: connect: connection refused" interval="400ms" Jan 23 18:32:18.002608 kubelet[2559]: I0123 18:32:18.002095 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-k8s-certs\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.002784 kubelet[2559]: I0123 18:32:18.002725 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-kubeconfig\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.002873 kubelet[2559]: I0123 18:32:18.002847 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.002981 kubelet[2559]: I0123 18:32:18.002909 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de6a4a660a81d7cd7b6d81442545c79c-kubeconfig\") pod \"kube-scheduler-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"de6a4a660a81d7cd7b6d81442545c79c\") " pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.003665 kubelet[2559]: I0123 18:32:18.003016 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/267fdf13591e1c8f25d11ea3652c4c45-ca-certs\") pod \"kube-apiserver-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"267fdf13591e1c8f25d11ea3652c4c45\") " pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.003665 kubelet[2559]: I0123 18:32:18.003107 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/267fdf13591e1c8f25d11ea3652c4c45-k8s-certs\") pod \"kube-apiserver-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"267fdf13591e1c8f25d11ea3652c4c45\") " 
pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.003665 kubelet[2559]: I0123 18:32:18.003246 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/267fdf13591e1c8f25d11ea3652c4c45-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"267fdf13591e1c8f25d11ea3652c4c45\") " pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.003665 kubelet[2559]: I0123 18:32:18.003309 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-ca-certs\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.003665 kubelet[2559]: I0123 18:32:18.003602 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.056249 kubelet[2559]: I0123 18:32:18.056205 2559 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.057180 kubelet[2559]: E0123 18:32:18.057128 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.238:6443/api/v1/nodes\": dial tcp 10.0.6.238:6443: connect: connection refused" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.160246 containerd[1695]: time="2026-01-23T18:32:18.159922923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-1-0-1-5b0cac0ed6,Uid:267fdf13591e1c8f25d11ea3652c4c45,Namespace:kube-system,Attempt:0,}" Jan 23 18:32:18.179692 containerd[1695]: time="2026-01-23T18:32:18.179498098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6,Uid:9a09cbaef23579d43819a06118324279,Namespace:kube-system,Attempt:0,}" Jan 23 18:32:18.186505 containerd[1695]: time="2026-01-23T18:32:18.186446524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-1-0-1-5b0cac0ed6,Uid:de6a4a660a81d7cd7b6d81442545c79c,Namespace:kube-system,Attempt:0,}" Jan 23 18:32:18.305455 kubelet[2559]: E0123 18:32:18.305372 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-1-5b0cac0ed6?timeout=10s\": dial tcp 10.0.6.238:6443: connect: connection refused" interval="800ms" Jan 23 18:32:18.461805 kubelet[2559]: I0123 18:32:18.461526 2559 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.462928 kubelet[2559]: E0123 18:32:18.462870 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.238:6443/api/v1/nodes\": dial tcp 10.0.6.238:6443: connect: connection refused" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:18.697312 kubelet[2559]: E0123 18:32:18.697217 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.6.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:32:18.724563 kubelet[2559]: E0123 18:32:18.724338 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.6.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-1-0-1-5b0cac0ed6&limit=500&resourceVersion=0\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:32:18.819277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357892326.mount: Deactivated successfully. Jan 23 18:32:18.835590 containerd[1695]: time="2026-01-23T18:32:18.835527171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:32:18.838956 containerd[1695]: time="2026-01-23T18:32:18.838916278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:32:18.858089 containerd[1695]: time="2026-01-23T18:32:18.858035682Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:32:18.859504 containerd[1695]: time="2026-01-23T18:32:18.859468531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:32:18.862371 containerd[1695]: time="2026-01-23T18:32:18.862343118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:32:18.864362 containerd[1695]: time="2026-01-23T18:32:18.863900632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:32:18.867189 containerd[1695]: time="2026-01-23T18:32:18.867166720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:32:18.867725 containerd[1695]: time="2026-01-23T18:32:18.867697604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 702.650503ms" Jan 23 18:32:18.869228 containerd[1695]: time="2026-01-23T18:32:18.869208235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:32:18.874535 containerd[1695]: time="2026-01-23T18:32:18.874501748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 690.725522ms" Jan 23 18:32:18.875682 containerd[1695]: 
time="2026-01-23T18:32:18.875642412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 683.906969ms" Jan 23 18:32:18.915055 kubelet[2559]: E0123 18:32:18.915012 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.6.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:32:18.944609 containerd[1695]: time="2026-01-23T18:32:18.944524054Z" level=info msg="connecting to shim f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800" address="unix:///run/containerd/s/517c923cada8d2a4518c99ee7c7f8d8403d56b8a7dc75e89263d60f0c89d51fa" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:32:18.947139 containerd[1695]: time="2026-01-23T18:32:18.947078054Z" level=info msg="connecting to shim 3311eb5af26d770a2e6dcf222e24b0bec260c2e597b6ee35e2af0ee57ec6fd98" address="unix:///run/containerd/s/8ef8ec14e9c041777ab837e916300a42a88ac9e7cdc6b600536633522c894703" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:32:18.962348 containerd[1695]: time="2026-01-23T18:32:18.962261115Z" level=info msg="connecting to shim 63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6" address="unix:///run/containerd/s/a741b7efbe7a14cebcf779a973836330d98ff938b83fcaa5caafa00feabbe48f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:32:18.963588 kubelet[2559]: E0123 18:32:18.963493 2559 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.6.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.6.238:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:32:18.979026 systemd[1]: Started cri-containerd-f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800.scope - libcontainer container f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800. Jan 23 18:32:18.996000 audit: BPF prog-id=83 op=LOAD Jan 23 18:32:19.004038 systemd[1]: Started cri-containerd-3311eb5af26d770a2e6dcf222e24b0bec260c2e597b6ee35e2af0ee57ec6fd98.scope - libcontainer container 3311eb5af26d770a2e6dcf222e24b0bec260c2e597b6ee35e2af0ee57ec6fd98. 
Jan 23 18:32:19.003000 audit: BPF prog-id=84 op=LOAD Jan 23 18:32:19.003000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2619 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638323833323235376665623363386565363533323630303262653863 Jan 23 18:32:19.003000 audit: BPF prog-id=84 op=UNLOAD Jan 23 18:32:19.003000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638323833323235376665623363386565363533323630303262653863 Jan 23 18:32:19.004000 audit: BPF prog-id=85 op=LOAD Jan 23 18:32:19.004000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2619 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638323833323235376665623363386565363533323630303262653863 Jan 23 18:32:19.004000 audit: BPF prog-id=86 op=LOAD Jan 23 18:32:19.004000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2619 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638323833323235376665623363386565363533323630303262653863 Jan 23 18:32:19.004000 audit: BPF prog-id=86 op=UNLOAD Jan 23 18:32:19.004000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638323833323235376665623363386565363533323630303262653863 Jan 23 18:32:19.004000 audit: BPF prog-id=85 op=UNLOAD Jan 23 18:32:19.004000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 
a3=0 items=0 ppid=2619 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638323833323235376665623363386565363533323630303262653863 Jan 23 18:32:19.004000 audit: BPF prog-id=87 op=LOAD Jan 23 18:32:19.004000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2619 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638323833323235376665623363386565363533323630303262653863 Jan 23 18:32:19.010046 systemd[1]: Started cri-containerd-63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6.scope - libcontainer container 63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6. Jan 23 18:32:19.027000 audit: BPF prog-id=88 op=LOAD Jan 23 18:32:19.028000 audit: BPF prog-id=89 op=LOAD Jan 23 18:32:19.028000 audit[2656]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2610 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333313165623561663236643737306132653664636632323265323462 Jan 23 18:32:19.028000 audit: BPF prog-id=89 op=UNLOAD Jan 23 18:32:19.028000 audit[2656]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333313165623561663236643737306132653664636632323265323462 Jan 23 18:32:19.029000 audit: BPF prog-id=90 op=LOAD Jan 23 18:32:19.029000 audit[2656]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2610 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333313165623561663236643737306132653664636632323265323462 Jan 23 18:32:19.029000 audit: BPF prog-id=91 
op=LOAD Jan 23 18:32:19.029000 audit[2656]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2610 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333313165623561663236643737306132653664636632323265323462 Jan 23 18:32:19.029000 audit: BPF prog-id=91 op=UNLOAD Jan 23 18:32:19.029000 audit[2656]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333313165623561663236643737306132653664636632323265323462 Jan 23 18:32:19.029000 audit: BPF prog-id=90 op=UNLOAD Jan 23 18:32:19.029000 audit[2656]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333313165623561663236643737306132653664636632323265323462 Jan 23 18:32:19.029000 audit: BPF prog-id=92 op=LOAD Jan 23 18:32:19.029000 audit[2656]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2610 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333313165623561663236643737306132653664636632323265323462 Jan 23 18:32:19.030000 audit: BPF prog-id=93 op=LOAD Jan 23 18:32:19.031000 audit: BPF prog-id=94 op=LOAD Jan 23 18:32:19.031000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2652 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653266613036623061316566323938333763633732656161323565 Jan 23 18:32:19.031000 audit: BPF prog-id=94 op=UNLOAD Jan 23 18:32:19.031000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653266613036623061316566323938333763633732656161323565 Jan 23 18:32:19.031000 audit: BPF prog-id=95 op=LOAD Jan 23 18:32:19.031000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2652 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653266613036623061316566323938333763633732656161323565 Jan 23 18:32:19.031000 audit: BPF prog-id=96 op=LOAD Jan 23 18:32:19.031000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2652 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653266613036623061316566323938333763633732656161323565 Jan 23 18:32:19.031000 audit: BPF prog-id=96 op=UNLOAD Jan 23 18:32:19.031000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653266613036623061316566323938333763633732656161323565 Jan 23 18:32:19.031000 audit: BPF prog-id=95 op=UNLOAD Jan 23 18:32:19.031000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653266613036623061316566323938333763633732656161323565 Jan 23 18:32:19.031000 audit: BPF prog-id=97 op=LOAD Jan 23 18:32:19.031000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2652 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653266613036623061316566323938333763633732656161323565 Jan 23 18:32:19.079373 containerd[1695]: time="2026-01-23T18:32:19.079241506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6,Uid:9a09cbaef23579d43819a06118324279,Namespace:kube-system,Attempt:0,} returns sandbox id \"f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800\"" Jan 23 18:32:19.087346 containerd[1695]: time="2026-01-23T18:32:19.087290600Z" level=info msg="CreateContainer within sandbox \"f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:32:19.088999 containerd[1695]: time="2026-01-23T18:32:19.088935517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-1-0-1-5b0cac0ed6,Uid:de6a4a660a81d7cd7b6d81442545c79c,Namespace:kube-system,Attempt:0,} returns sandbox id \"63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6\"" Jan 23 18:32:19.094766 containerd[1695]: time="2026-01-23T18:32:19.094740563Z" level=info msg="CreateContainer within sandbox \"63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:32:19.100515 containerd[1695]: time="2026-01-23T18:32:19.100430015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-1-0-1-5b0cac0ed6,Uid:267fdf13591e1c8f25d11ea3652c4c45,Namespace:kube-system,Attempt:0,} returns sandbox id \"3311eb5af26d770a2e6dcf222e24b0bec260c2e597b6ee35e2af0ee57ec6fd98\"" Jan 23 18:32:19.105397 containerd[1695]: time="2026-01-23T18:32:19.105356393Z" level=info msg="CreateContainer within sandbox \"3311eb5af26d770a2e6dcf222e24b0bec260c2e597b6ee35e2af0ee57ec6fd98\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:32:19.105683 containerd[1695]: time="2026-01-23T18:32:19.105665366Z" level=info msg="Container cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:19.107059 kubelet[2559]: E0123 18:32:19.106989 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.6.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-1-5b0cac0ed6?timeout=10s\": dial tcp 10.0.6.238:6443: connect: connection refused" interval="1.6s" Jan 23 18:32:19.120363 containerd[1695]: time="2026-01-23T18:32:19.120233800Z" level=info msg="CreateContainer within sandbox \"f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2\"" Jan 23 18:32:19.121606 containerd[1695]: time="2026-01-23T18:32:19.121003282Z" level=info msg="StartContainer for \"cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2\"" Jan 23 18:32:19.123194 containerd[1695]: time="2026-01-23T18:32:19.123177024Z" level=info msg="Container 3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:19.123763 containerd[1695]: 
time="2026-01-23T18:32:19.123660428Z" level=info msg="connecting to shim cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2" address="unix:///run/containerd/s/517c923cada8d2a4518c99ee7c7f8d8403d56b8a7dc75e89263d60f0c89d51fa" protocol=ttrpc version=3 Jan 23 18:32:19.130384 containerd[1695]: time="2026-01-23T18:32:19.130349409Z" level=info msg="Container 8e100560d94b23a9d001c317f4ea2329fbdc9a9992181ea34b6f40072d2973bb: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:19.140063 containerd[1695]: time="2026-01-23T18:32:19.139992244Z" level=info msg="CreateContainer within sandbox \"63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3\"" Jan 23 18:32:19.140593 containerd[1695]: time="2026-01-23T18:32:19.140550971Z" level=info msg="StartContainer for \"3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3\"" Jan 23 18:32:19.141511 containerd[1695]: time="2026-01-23T18:32:19.141473652Z" level=info msg="connecting to shim 3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3" address="unix:///run/containerd/s/a741b7efbe7a14cebcf779a973836330d98ff938b83fcaa5caafa00feabbe48f" protocol=ttrpc version=3 Jan 23 18:32:19.144793 systemd[1]: Started cri-containerd-cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2.scope - libcontainer container cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2. Jan 23 18:32:19.164511 containerd[1695]: time="2026-01-23T18:32:19.164453380Z" level=info msg="CreateContainer within sandbox \"3311eb5af26d770a2e6dcf222e24b0bec260c2e597b6ee35e2af0ee57ec6fd98\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e100560d94b23a9d001c317f4ea2329fbdc9a9992181ea34b6f40072d2973bb\"" Jan 23 18:32:19.166573 containerd[1695]: time="2026-01-23T18:32:19.166542139Z" level=info msg="StartContainer for \"8e100560d94b23a9d001c317f4ea2329fbdc9a9992181ea34b6f40072d2973bb\"" Jan 23 18:32:19.167581 containerd[1695]: time="2026-01-23T18:32:19.167555029Z" level=info msg="connecting to shim 8e100560d94b23a9d001c317f4ea2329fbdc9a9992181ea34b6f40072d2973bb" address="unix:///run/containerd/s/8ef8ec14e9c041777ab837e916300a42a88ac9e7cdc6b600536633522c894703" protocol=ttrpc version=3 Jan 23 18:32:19.172070 systemd[1]: Started cri-containerd-3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3.scope - libcontainer container 3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3. 
Jan 23 18:32:19.173000 audit: BPF prog-id=98 op=LOAD Jan 23 18:32:19.174000 audit: BPF prog-id=99 op=LOAD Jan 23 18:32:19.174000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2619 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362313130303630313063613231326331653462353138356232656366 Jan 23 18:32:19.174000 audit: BPF prog-id=99 op=UNLOAD Jan 23 18:32:19.174000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362313130303630313063613231326331653462353138356232656366 Jan 23 18:32:19.174000 audit: BPF prog-id=100 op=LOAD Jan 23 18:32:19.174000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2619 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362313130303630313063613231326331653462353138356232656366 Jan 23 18:32:19.174000 audit: BPF prog-id=101 op=LOAD Jan 23 18:32:19.174000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2619 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362313130303630313063613231326331653462353138356232656366 Jan 23 18:32:19.174000 audit: BPF prog-id=101 op=UNLOAD Jan 23 18:32:19.174000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362313130303630313063613231326331653462353138356232656366 Jan 23 18:32:19.174000 audit: BPF prog-id=100 op=UNLOAD Jan 23 18:32:19.174000 audit[2742]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362313130303630313063613231326331653462353138356232656366 Jan 23 18:32:19.174000 audit: BPF prog-id=102 op=LOAD Jan 23 18:32:19.174000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2619 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362313130303630313063613231326331653462353138356232656366 Jan 23 18:32:19.189028 systemd[1]: Started cri-containerd-8e100560d94b23a9d001c317f4ea2329fbdc9a9992181ea34b6f40072d2973bb.scope - libcontainer container 8e100560d94b23a9d001c317f4ea2329fbdc9a9992181ea34b6f40072d2973bb. Jan 23 18:32:19.198000 audit: BPF prog-id=103 op=LOAD Jan 23 18:32:19.199000 audit: BPF prog-id=104 op=LOAD Jan 23 18:32:19.199000 audit[2754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2652 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366353033663162323638663863306638663232383833373262633236 Jan 23 18:32:19.199000 audit: BPF prog-id=104 op=UNLOAD Jan 23 18:32:19.199000 audit[2754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366353033663162323638663863306638663232383833373262633236 Jan 23 18:32:19.199000 audit: BPF prog-id=105 op=LOAD Jan 23 18:32:19.199000 audit[2754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2652 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.199000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366353033663162323638663863306638663232383833373262633236 Jan 23 18:32:19.199000 audit: BPF prog-id=106 op=LOAD Jan 23 18:32:19.199000 audit[2754]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2652 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366353033663162323638663863306638663232383833373262633236 Jan 23 18:32:19.199000 audit: BPF prog-id=106 op=UNLOAD Jan 23 18:32:19.199000 audit[2754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366353033663162323638663863306638663232383833373262633236 Jan 23 18:32:19.199000 audit: BPF prog-id=105 op=UNLOAD Jan 23 18:32:19.199000 audit[2754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366353033663162323638663863306638663232383833373262633236 Jan 23 18:32:19.199000 audit: BPF prog-id=107 op=LOAD Jan 23 18:32:19.199000 audit[2754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2652 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366353033663162323638663863306638663232383833373262633236 Jan 23 18:32:19.206000 audit: BPF prog-id=108 op=LOAD Jan 23 18:32:19.207000 audit: BPF prog-id=109 op=LOAD Jan 23 18:32:19.207000 audit[2773]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2610 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.207000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313030353630643934623233613964303031633331376634656132 Jan 23 18:32:19.207000 audit: BPF prog-id=109 op=UNLOAD Jan 23 18:32:19.207000 audit[2773]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313030353630643934623233613964303031633331376634656132 Jan 23 18:32:19.207000 audit: BPF prog-id=110 op=LOAD Jan 23 18:32:19.207000 audit[2773]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2610 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313030353630643934623233613964303031633331376634656132 Jan 23 18:32:19.207000 audit: BPF prog-id=111 op=LOAD Jan 23 18:32:19.207000 audit[2773]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2610 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313030353630643934623233613964303031633331376634656132 Jan 23 18:32:19.207000 audit: BPF prog-id=111 op=UNLOAD Jan 23 18:32:19.207000 audit[2773]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313030353630643934623233613964303031633331376634656132 Jan 23 18:32:19.207000 audit: BPF prog-id=110 op=UNLOAD Jan 23 18:32:19.207000 audit[2773]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2610 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.207000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313030353630643934623233613964303031633331376634656132 Jan 23 18:32:19.207000 audit: BPF prog-id=112 op=LOAD Jan 23 18:32:19.207000 audit[2773]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2610 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865313030353630643934623233613964303031633331376634656132 Jan 23 18:32:19.240008 containerd[1695]: time="2026-01-23T18:32:19.239916411Z" level=info msg="StartContainer for \"cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2\" returns successfully" Jan 23 18:32:19.264535 containerd[1695]: time="2026-01-23T18:32:19.264456820Z" level=info msg="StartContainer for \"3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3\" returns successfully" Jan 23 18:32:19.267236 kubelet[2559]: I0123 18:32:19.267202 2559 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:19.268744 kubelet[2559]: E0123 18:32:19.268719 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.6.238:6443/api/v1/nodes\": dial tcp 10.0.6.238:6443: connect: connection refused" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:19.276314 containerd[1695]: time="2026-01-23T18:32:19.276270666Z" level=info msg="StartContainer for \"8e100560d94b23a9d001c317f4ea2329fbdc9a9992181ea34b6f40072d2973bb\" returns successfully" Jan 23 18:32:19.734777 kubelet[2559]: E0123 18:32:19.734339 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:19.737678 kubelet[2559]: E0123 18:32:19.737635 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:19.740425 kubelet[2559]: E0123 18:32:19.740318 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:20.743871 kubelet[2559]: E0123 18:32:20.742372 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:20.744581 kubelet[2559]: E0123 18:32:20.744475 2559 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:20.870922 kubelet[2559]: I0123 18:32:20.870899 2559 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:20.922559 kubelet[2559]: E0123 18:32:20.922499 2559 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4547-1-0-1-5b0cac0ed6\" not found" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.023090 kubelet[2559]: I0123 18:32:21.022782 2559 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.023090 kubelet[2559]: E0123 18:32:21.022830 2559 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4547-1-0-1-5b0cac0ed6\": node \"ci-4547-1-0-1-5b0cac0ed6\" not found" Jan 23 18:32:21.032737 kubelet[2559]: E0123 18:32:21.032698 2559 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547-1-0-1-5b0cac0ed6\" not found" Jan 23 18:32:21.097847 kubelet[2559]: I0123 18:32:21.097798 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.102929 kubelet[2559]: E0123 18:32:21.102882 2559 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-1-0-1-5b0cac0ed6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.102929 kubelet[2559]: I0123 18:32:21.102920 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.104499 kubelet[2559]: E0123 18:32:21.104479 2559 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-1-0-1-5b0cac0ed6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.104499 kubelet[2559]: I0123 18:32:21.104497 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.105917 kubelet[2559]: E0123 18:32:21.105872 2559 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.659443 kubelet[2559]: I0123 18:32:21.659365 2559 apiserver.go:52] "Watching apiserver" Jan 23 18:32:21.701574 kubelet[2559]: I0123 18:32:21.701434 2559 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:32:21.742834 kubelet[2559]: I0123 18:32:21.742786 2559 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:21.746622 kubelet[2559]: E0123 18:32:21.746335 2559 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-1-0-1-5b0cac0ed6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.143011 systemd[1]: Reload requested from client PID 2843 ('systemctl') (unit session-10.scope)... Jan 23 18:32:23.143309 systemd[1]: Reloading... Jan 23 18:32:23.222859 zram_generator::config[2898]: No configuration found. Jan 23 18:32:23.414331 systemd[1]: Reloading finished in 270 ms. Jan 23 18:32:23.439932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:32:23.458725 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:32:23.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:32:23.459012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:32:23.460012 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 23 18:32:23.460082 kernel: audit: type=1131 audit(1769193143.458:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:23.462528 systemd[1]: kubelet.service: Consumed 1.349s CPU time, 124.3M memory peak. Jan 23 18:32:23.465092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:32:23.464000 audit: BPF prog-id=113 op=LOAD Jan 23 18:32:23.469199 kernel: audit: type=1334 audit(1769193143.464:385): prog-id=113 op=LOAD Jan 23 18:32:23.469266 kernel: audit: type=1334 audit(1769193143.464:386): prog-id=70 op=UNLOAD Jan 23 18:32:23.464000 audit: BPF prog-id=70 op=UNLOAD Jan 23 18:32:23.470872 kernel: audit: type=1334 audit(1769193143.466:387): prog-id=114 op=LOAD Jan 23 18:32:23.466000 audit: BPF prog-id=114 op=LOAD Jan 23 18:32:23.472362 kernel: audit: type=1334 audit(1769193143.466:388): prog-id=115 op=LOAD Jan 23 18:32:23.466000 audit: BPF prog-id=115 op=LOAD Jan 23 18:32:23.473888 kernel: audit: type=1334 audit(1769193143.466:389): prog-id=71 op=UNLOAD Jan 23 18:32:23.466000 audit: BPF prog-id=71 op=UNLOAD Jan 23 18:32:23.475280 kernel: audit: type=1334 audit(1769193143.466:390): prog-id=72 op=UNLOAD Jan 23 18:32:23.466000 audit: BPF prog-id=72 op=UNLOAD Jan 23 18:32:23.476639 kernel: audit: type=1334 audit(1769193143.467:391): prog-id=116 op=LOAD Jan 23 18:32:23.467000 audit: BPF prog-id=116 op=LOAD Jan 23 18:32:23.477846 kernel: audit: type=1334 audit(1769193143.467:392): prog-id=64 op=UNLOAD Jan 23 18:32:23.467000 audit: BPF prog-id=64 op=UNLOAD Jan 23 18:32:23.467000 audit: BPF prog-id=117 op=LOAD Jan 23 18:32:23.479276 kernel: audit: type=1334 audit(1769193143.467:393): prog-id=117 op=LOAD Jan 23 18:32:23.467000 audit: BPF prog-id=118 op=LOAD Jan 23 18:32:23.467000 audit: BPF prog-id=65 op=UNLOAD Jan 23 18:32:23.467000 audit: BPF prog-id=66 op=UNLOAD Jan 23 18:32:23.467000 audit: BPF prog-id=119 op=LOAD Jan 23 18:32:23.467000 audit: BPF prog-id=120 op=LOAD Jan 23 18:32:23.467000 audit: BPF prog-id=68 op=UNLOAD Jan 23 18:32:23.467000 audit: BPF prog-id=69 op=UNLOAD Jan 23 18:32:23.468000 audit: BPF prog-id=121 op=LOAD Jan 23 18:32:23.468000 audit: BPF prog-id=80 op=UNLOAD Jan 23 18:32:23.468000 audit: BPF prog-id=122 op=LOAD Jan 23 18:32:23.468000 audit: BPF prog-id=123 op=LOAD Jan 23 18:32:23.468000 audit: BPF prog-id=81 op=UNLOAD Jan 23 18:32:23.468000 audit: BPF prog-id=82 op=UNLOAD Jan 23 18:32:23.469000 audit: BPF prog-id=124 op=LOAD Jan 23 18:32:23.469000 audit: BPF prog-id=67 op=UNLOAD Jan 23 18:32:23.471000 audit: BPF prog-id=125 op=LOAD Jan 23 18:32:23.471000 audit: BPF prog-id=63 op=UNLOAD Jan 23 18:32:23.471000 audit: BPF prog-id=126 op=LOAD Jan 23 18:32:23.471000 audit: BPF prog-id=77 op=UNLOAD Jan 23 18:32:23.472000 audit: BPF prog-id=127 op=LOAD Jan 23 18:32:23.472000 audit: BPF prog-id=128 op=LOAD Jan 23 18:32:23.472000 audit: BPF prog-id=78 op=UNLOAD Jan 23 18:32:23.472000 audit: BPF prog-id=79 op=UNLOAD Jan 23 18:32:23.473000 audit: BPF prog-id=129 op=LOAD Jan 23 18:32:23.473000 audit: BPF prog-id=74 op=UNLOAD Jan 23 18:32:23.473000 audit: BPF prog-id=130 op=LOAD Jan 23 18:32:23.473000 audit: BPF prog-id=131 op=LOAD Jan 23 18:32:23.473000 audit: BPF prog-id=75 op=UNLOAD Jan 23 18:32:23.473000 audit: BPF 
prog-id=76 op=UNLOAD Jan 23 18:32:23.474000 audit: BPF prog-id=132 op=LOAD Jan 23 18:32:23.474000 audit: BPF prog-id=73 op=UNLOAD Jan 23 18:32:23.604720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:32:23.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:23.611224 (kubelet)[2940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:32:23.645943 kubelet[2940]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:32:23.646283 kubelet[2940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:32:23.646385 kubelet[2940]: I0123 18:32:23.646366 2940 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:32:23.656047 kubelet[2940]: I0123 18:32:23.656015 2940 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:32:23.656047 kubelet[2940]: I0123 18:32:23.656042 2940 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:32:23.656177 kubelet[2940]: I0123 18:32:23.656069 2940 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:32:23.656177 kubelet[2940]: I0123 18:32:23.656081 2940 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:32:23.656317 kubelet[2940]: I0123 18:32:23.656299 2940 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:32:23.657754 kubelet[2940]: I0123 18:32:23.657706 2940 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:32:23.660528 kubelet[2940]: I0123 18:32:23.660146 2940 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:32:23.663536 kubelet[2940]: I0123 18:32:23.663520 2940 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:32:23.666621 kubelet[2940]: I0123 18:32:23.666563 2940 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 18:32:23.666782 kubelet[2940]: I0123 18:32:23.666761 2940 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:32:23.667160 kubelet[2940]: I0123 18:32:23.666786 2940 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-1-0-1-5b0cac0ed6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:32:23.667160 kubelet[2940]: I0123 18:32:23.666963 2940 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:32:23.667160 kubelet[2940]: I0123 18:32:23.666975 2940 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:32:23.667160 kubelet[2940]: I0123 18:32:23.666999 2940 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:32:23.667793 kubelet[2940]: I0123 18:32:23.667779 2940 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:32:23.667999 kubelet[2940]: I0123 18:32:23.667952 2940 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:32:23.667999 kubelet[2940]: I0123 18:32:23.667972 2940 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:32:23.667999 kubelet[2940]: I0123 18:32:23.667993 2940 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:32:23.668403 kubelet[2940]: I0123 18:32:23.668011 2940 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:32:23.682849 kubelet[2940]: I0123 18:32:23.682811 2940 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:32:23.683360 kubelet[2940]: I0123 18:32:23.683350 2940 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:32:23.683455 kubelet[2940]: I0123 18:32:23.683449 2940 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 
18:32:23.686173 kubelet[2940]: I0123 18:32:23.686162 2940 server.go:1262] "Started kubelet" Jan 23 18:32:23.687662 kubelet[2940]: I0123 18:32:23.686928 2940 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:32:23.687835 kubelet[2940]: I0123 18:32:23.687802 2940 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:32:23.688585 kubelet[2940]: I0123 18:32:23.688530 2940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:32:23.691335 kubelet[2940]: I0123 18:32:23.691304 2940 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:32:23.691397 kubelet[2940]: I0123 18:32:23.691350 2940 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:32:23.691496 kubelet[2940]: I0123 18:32:23.691483 2940 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:32:23.692441 kubelet[2940]: I0123 18:32:23.692423 2940 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:32:23.698627 kubelet[2940]: I0123 18:32:23.698611 2940 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:32:23.698714 kubelet[2940]: I0123 18:32:23.698702 2940 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:32:23.698918 kubelet[2940]: I0123 18:32:23.698906 2940 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:32:23.699294 kubelet[2940]: E0123 18:32:23.699277 2940 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:32:23.700239 kubelet[2940]: I0123 18:32:23.699852 2940 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:32:23.700239 kubelet[2940]: I0123 18:32:23.699942 2940 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:32:23.700904 kubelet[2940]: I0123 18:32:23.700889 2940 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:32:23.706595 kubelet[2940]: I0123 18:32:23.706577 2940 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:32:23.707614 kubelet[2940]: I0123 18:32:23.707601 2940 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:32:23.707681 kubelet[2940]: I0123 18:32:23.707675 2940 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:32:23.707728 kubelet[2940]: I0123 18:32:23.707724 2940 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:32:23.707797 kubelet[2940]: E0123 18:32:23.707785 2940 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:32:23.754745 kubelet[2940]: I0123 18:32:23.754726 2940 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:32:23.754902 kubelet[2940]: I0123 18:32:23.754891 2940 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:32:23.755212 kubelet[2940]: I0123 18:32:23.755202 2940 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:32:23.755388 kubelet[2940]: I0123 18:32:23.755378 2940 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:32:23.755443 kubelet[2940]: I0123 18:32:23.755427 2940 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:32:23.755479 kubelet[2940]: I0123 18:32:23.755474 2940 policy_none.go:49] "None policy: Start" Jan 23 18:32:23.755479 kubelet[2940]: I0123 18:32:23.755495 2940 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:32:23.755479 kubelet[2940]: I0123 18:32:23.755504 2940 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:32:23.756206 kubelet[2940]: I0123 18:32:23.755743 2940 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 18:32:23.756206 kubelet[2940]: I0123 18:32:23.755753 2940 policy_none.go:47] "Start" Jan 23 18:32:23.759064 kubelet[2940]: E0123 18:32:23.759049 2940 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:32:23.759445 kubelet[2940]: I0123 18:32:23.759434 2940 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:32:23.759513 kubelet[2940]: I0123 18:32:23.759493 2940 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:32:23.759765 kubelet[2940]: I0123 18:32:23.759751 2940 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:32:23.762589 kubelet[2940]: E0123 18:32:23.762571 2940 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:32:23.808919 kubelet[2940]: I0123 18:32:23.808893 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.809272 kubelet[2940]: I0123 18:32:23.809135 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.811336 kubelet[2940]: I0123 18:32:23.809220 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.866664 kubelet[2940]: I0123 18:32:23.866452 2940 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.883132 kubelet[2940]: I0123 18:32:23.883099 2940 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.883402 kubelet[2940]: I0123 18:32:23.883393 2940 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900140 kubelet[2940]: I0123 18:32:23.900114 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/267fdf13591e1c8f25d11ea3652c4c45-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"267fdf13591e1c8f25d11ea3652c4c45\") " pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900434 kubelet[2940]: I0123 18:32:23.900258 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-ca-certs\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900434 kubelet[2940]: I0123 18:32:23.900280 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900434 kubelet[2940]: I0123 18:32:23.900305 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-k8s-certs\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900434 kubelet[2940]: I0123 18:32:23.900325 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900434 kubelet[2940]: I0123 18:32:23.900342 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/267fdf13591e1c8f25d11ea3652c4c45-ca-certs\") pod 
\"kube-apiserver-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"267fdf13591e1c8f25d11ea3652c4c45\") " pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900586 kubelet[2940]: I0123 18:32:23.900357 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/267fdf13591e1c8f25d11ea3652c4c45-k8s-certs\") pod \"kube-apiserver-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"267fdf13591e1c8f25d11ea3652c4c45\") " pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900586 kubelet[2940]: I0123 18:32:23.900373 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a09cbaef23579d43819a06118324279-kubeconfig\") pod \"kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"9a09cbaef23579d43819a06118324279\") " pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:23.900586 kubelet[2940]: I0123 18:32:23.900390 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de6a4a660a81d7cd7b6d81442545c79c-kubeconfig\") pod \"kube-scheduler-ci-4547-1-0-1-5b0cac0ed6\" (UID: \"de6a4a660a81d7cd7b6d81442545c79c\") " pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:24.673558 kubelet[2940]: I0123 18:32:24.673483 2940 apiserver.go:52] "Watching apiserver" Jan 23 18:32:24.699103 kubelet[2940]: I0123 18:32:24.699042 2940 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:32:24.746245 kubelet[2940]: I0123 18:32:24.745323 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:24.759838 kubelet[2940]: E0123 18:32:24.759751 2940 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-1-0-1-5b0cac0ed6\" already exists" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:32:24.787290 kubelet[2940]: I0123 18:32:24.787046 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-1-0-1-5b0cac0ed6" podStartSLOduration=1.78702476 podStartE2EDuration="1.78702476s" podCreationTimestamp="2026-01-23 18:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:24.785857128 +0000 UTC m=+1.170628179" watchObservedRunningTime="2026-01-23 18:32:24.78702476 +0000 UTC m=+1.171795799" Jan 23 18:32:24.812560 kubelet[2940]: I0123 18:32:24.812402 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-1-0-1-5b0cac0ed6" podStartSLOduration=1.8123823799999998 podStartE2EDuration="1.81238238s" podCreationTimestamp="2026-01-23 18:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:24.798101009 +0000 UTC m=+1.182872099" watchObservedRunningTime="2026-01-23 18:32:24.81238238 +0000 UTC m=+1.197153420" Jan 23 18:32:24.826183 kubelet[2940]: I0123 18:32:24.826114 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-1-0-1-5b0cac0ed6" podStartSLOduration=1.826097735 podStartE2EDuration="1.826097735s" podCreationTimestamp="2026-01-23 18:32:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:24.813360017 +0000 UTC m=+1.198131076" watchObservedRunningTime="2026-01-23 18:32:24.826097735 +0000 UTC m=+1.210868786" Jan 23 18:32:29.966261 kubelet[2940]: I0123 18:32:29.965979 2940 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:32:29.967045 kubelet[2940]: I0123 18:32:29.966979 2940 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:32:29.967125 containerd[1695]: time="2026-01-23T18:32:29.966444356Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:32:30.186214 systemd[1]: Created slice kubepods-besteffort-pod01b92887_18bf_48d9_8e38_e043a97e884a.slice - libcontainer container kubepods-besteffort-pod01b92887_18bf_48d9_8e38_e043a97e884a.slice. Jan 23 18:32:30.240220 kubelet[2940]: I0123 18:32:30.240035 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01b92887-18bf-48d9-8e38-e043a97e884a-xtables-lock\") pod \"kube-proxy-p5m2l\" (UID: \"01b92887-18bf-48d9-8e38-e043a97e884a\") " pod="kube-system/kube-proxy-p5m2l" Jan 23 18:32:30.240220 kubelet[2940]: I0123 18:32:30.240108 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01b92887-18bf-48d9-8e38-e043a97e884a-lib-modules\") pod \"kube-proxy-p5m2l\" (UID: \"01b92887-18bf-48d9-8e38-e043a97e884a\") " pod="kube-system/kube-proxy-p5m2l" Jan 23 18:32:30.240220 kubelet[2940]: I0123 18:32:30.240142 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647x9\" (UniqueName: \"kubernetes.io/projected/01b92887-18bf-48d9-8e38-e043a97e884a-kube-api-access-647x9\") pod \"kube-proxy-p5m2l\" (UID: \"01b92887-18bf-48d9-8e38-e043a97e884a\") " pod="kube-system/kube-proxy-p5m2l" Jan 23 18:32:30.240220 kubelet[2940]: I0123 18:32:30.240179 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01b92887-18bf-48d9-8e38-e043a97e884a-kube-proxy\") pod \"kube-proxy-p5m2l\" (UID: \"01b92887-18bf-48d9-8e38-e043a97e884a\") " pod="kube-system/kube-proxy-p5m2l" Jan 23 18:32:30.355007 kubelet[2940]: E0123 18:32:30.354937 2940 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 18:32:30.355007 kubelet[2940]: E0123 18:32:30.354997 2940 projected.go:196] Error preparing data for projected volume kube-api-access-647x9 for pod kube-system/kube-proxy-p5m2l: configmap "kube-root-ca.crt" not found Jan 23 18:32:30.355281 kubelet[2940]: E0123 18:32:30.355131 2940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01b92887-18bf-48d9-8e38-e043a97e884a-kube-api-access-647x9 podName:01b92887-18bf-48d9-8e38-e043a97e884a nodeName:}" failed. No retries permitted until 2026-01-23 18:32:30.855085765 +0000 UTC m=+7.239856847 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-647x9" (UniqueName: "kubernetes.io/projected/01b92887-18bf-48d9-8e38-e043a97e884a-kube-api-access-647x9") pod "kube-proxy-p5m2l" (UID: "01b92887-18bf-48d9-8e38-e043a97e884a") : configmap "kube-root-ca.crt" not found Jan 23 18:32:30.946085 kubelet[2940]: E0123 18:32:30.946030 2940 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 18:32:30.946085 kubelet[2940]: E0123 18:32:30.946088 2940 projected.go:196] Error preparing data for projected volume kube-api-access-647x9 for pod kube-system/kube-proxy-p5m2l: configmap "kube-root-ca.crt" not found Jan 23 18:32:30.946309 kubelet[2940]: E0123 18:32:30.946222 2940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01b92887-18bf-48d9-8e38-e043a97e884a-kube-api-access-647x9 podName:01b92887-18bf-48d9-8e38-e043a97e884a nodeName:}" failed. No retries permitted until 2026-01-23 18:32:31.946185398 +0000 UTC m=+8.330956435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-647x9" (UniqueName: "kubernetes.io/projected/01b92887-18bf-48d9-8e38-e043a97e884a-kube-api-access-647x9") pod "kube-proxy-p5m2l" (UID: "01b92887-18bf-48d9-8e38-e043a97e884a") : configmap "kube-root-ca.crt" not found Jan 23 18:32:31.252300 systemd[1]: Created slice kubepods-besteffort-pod6a317212_5936_4448_b3a3_f54e06fe9387.slice - libcontainer container kubepods-besteffort-pod6a317212_5936_4448_b3a3_f54e06fe9387.slice. Jan 23 18:32:31.348673 kubelet[2940]: I0123 18:32:31.348511 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a317212-5936-4448-b3a3-f54e06fe9387-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-4njb6\" (UID: \"6a317212-5936-4448-b3a3-f54e06fe9387\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4njb6" Jan 23 18:32:31.349462 kubelet[2940]: I0123 18:32:31.349402 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4l2r\" (UniqueName: \"kubernetes.io/projected/6a317212-5936-4448-b3a3-f54e06fe9387-kube-api-access-k4l2r\") pod \"tigera-operator-65cdcdfd6d-4njb6\" (UID: \"6a317212-5936-4448-b3a3-f54e06fe9387\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4njb6" Jan 23 18:32:31.561006 containerd[1695]: time="2026-01-23T18:32:31.560721757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4njb6,Uid:6a317212-5936-4448-b3a3-f54e06fe9387,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:32:31.613705 containerd[1695]: time="2026-01-23T18:32:31.613549877Z" level=info msg="connecting to shim 8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c" address="unix:///run/containerd/s/459664e11666a3242c51381cc86d46e86e42b1a949ecb28aa1180c0c2bc29fb8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:32:31.673200 systemd[1]: Started cri-containerd-8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c.scope - libcontainer container 8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c. 
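
[Editor's note] The audit records that follow log each command line as a PROCTITLE field: hex-encoded argv, with arguments separated by NUL bytes. A minimal Python sketch for reading these values (the helper name is the editor's; the sample string is copied verbatim from the NETFILTER_CFG record at 18:32:32.426 further down; none of this is part of the original journal):

def decode_proctitle(hex_blob: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_blob)
    # Each argv element is delimited by a NUL byte; drop empties and rejoin.
    return " ".join(p.decode("utf-8", errors="replace") for p in raw.split(b"\x00") if p)

# Sample copied from the iptables NETFILTER_CFG record at 18:32:32.426 below.
sample = "69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
print(decode_proctitle(sample))  # iptables -w 5 -N KUBE-PROXY-CANARY -t mangle

The same decoding applies to the runc and ip6tables PROCTITLE values in the remainder of the capture; the runc records, for example, decode to invocations of the form runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id>... (the audit field is length-limited, so longer command lines appear cut off, and they are reproduced below exactly as logged).
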
Jan 23 18:32:31.694496 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 23 18:32:31.694622 kernel: audit: type=1334 audit(1769193151.690:426): prog-id=133 op=LOAD Jan 23 18:32:31.690000 audit: BPF prog-id=133 op=LOAD Jan 23 18:32:31.697962 kernel: audit: type=1334 audit(1769193151.695:427): prog-id=134 op=LOAD Jan 23 18:32:31.695000 audit: BPF prog-id=134 op=LOAD Jan 23 18:32:31.695000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.700416 kernel: audit: type=1300 audit(1769193151.695:427): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.707252 kernel: audit: type=1327 audit(1769193151.695:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.712208 kernel: audit: type=1334 audit(1769193151.695:428): prog-id=134 op=UNLOAD Jan 23 18:32:31.695000 audit: BPF prog-id=134 op=UNLOAD Jan 23 18:32:31.695000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.716355 kernel: audit: type=1300 audit(1769193151.695:428): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.722497 kernel: audit: type=1327 audit(1769193151.695:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.695000 audit: BPF prog-id=135 op=LOAD Jan 23 18:32:31.727921 kernel: audit: type=1334 audit(1769193151.695:429): prog-id=135 op=LOAD Jan 23 18:32:31.727972 kernel: audit: type=1300 audit(1769193151.695:429): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.695000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.734777 kernel: audit: type=1327 audit(1769193151.695:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.695000 audit: BPF prog-id=136 op=LOAD Jan 23 18:32:31.695000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.696000 audit: BPF prog-id=136 op=UNLOAD Jan 23 18:32:31.696000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.696000 audit: BPF prog-id=135 op=UNLOAD Jan 23 18:32:31.696000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.696000 audit: BPF prog-id=137 op=LOAD Jan 23 18:32:31.696000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2999 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:31.696000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865613161626163636237613865373961613438326630306339353531 Jan 23 18:32:31.779554 containerd[1695]: time="2026-01-23T18:32:31.779505111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4njb6,Uid:6a317212-5936-4448-b3a3-f54e06fe9387,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c\"" Jan 23 18:32:31.783035 containerd[1695]: time="2026-01-23T18:32:31.782938382Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:32:31.999623 containerd[1695]: time="2026-01-23T18:32:31.999367100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p5m2l,Uid:01b92887-18bf-48d9-8e38-e043a97e884a,Namespace:kube-system,Attempt:0,}" Jan 23 18:32:32.037420 containerd[1695]: time="2026-01-23T18:32:32.037351815Z" level=info msg="connecting to shim b809113f6da5c8427756dd96d11fb4c37df340427de04a0e454ffe62c2543239" address="unix:///run/containerd/s/85882c72b42306fa6df79ca377dff46f62c907cb5cefc1d210d13d4605c6c306" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:32:32.069115 systemd[1]: Started cri-containerd-b809113f6da5c8427756dd96d11fb4c37df340427de04a0e454ffe62c2543239.scope - libcontainer container b809113f6da5c8427756dd96d11fb4c37df340427de04a0e454ffe62c2543239. Jan 23 18:32:32.085000 audit: BPF prog-id=138 op=LOAD Jan 23 18:32:32.085000 audit: BPF prog-id=139 op=LOAD Jan 23 18:32:32.085000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3044 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238303931313366366461356338343237373536646439366431316662 Jan 23 18:32:32.085000 audit: BPF prog-id=139 op=UNLOAD Jan 23 18:32:32.085000 audit[3055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3044 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238303931313366366461356338343237373536646439366431316662 Jan 23 18:32:32.085000 audit: BPF prog-id=140 op=LOAD Jan 23 18:32:32.085000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3044 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.085000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238303931313366366461356338343237373536646439366431316662 Jan 23 18:32:32.085000 audit: BPF prog-id=141 op=LOAD Jan 23 18:32:32.085000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3044 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238303931313366366461356338343237373536646439366431316662 Jan 23 18:32:32.086000 audit: BPF prog-id=141 op=UNLOAD Jan 23 18:32:32.086000 audit[3055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3044 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238303931313366366461356338343237373536646439366431316662 Jan 23 18:32:32.086000 audit: BPF prog-id=140 op=UNLOAD Jan 23 18:32:32.086000 audit[3055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3044 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238303931313366366461356338343237373536646439366431316662 Jan 23 18:32:32.086000 audit: BPF prog-id=142 op=LOAD Jan 23 18:32:32.086000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3044 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238303931313366366461356338343237373536646439366431316662 Jan 23 18:32:32.106617 containerd[1695]: time="2026-01-23T18:32:32.106567730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p5m2l,Uid:01b92887-18bf-48d9-8e38-e043a97e884a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b809113f6da5c8427756dd96d11fb4c37df340427de04a0e454ffe62c2543239\"" Jan 23 18:32:32.113552 containerd[1695]: time="2026-01-23T18:32:32.113495551Z" level=info msg="CreateContainer within sandbox \"b809113f6da5c8427756dd96d11fb4c37df340427de04a0e454ffe62c2543239\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:32:32.126997 containerd[1695]: time="2026-01-23T18:32:32.126970537Z" level=info msg="Container ed483e5ca64a75eb771c4abc1a3b6e36ddebab2b7cb9fedcc90ba1e4063038c3: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:32.136827 containerd[1695]: time="2026-01-23T18:32:32.136775112Z" level=info msg="CreateContainer within sandbox \"b809113f6da5c8427756dd96d11fb4c37df340427de04a0e454ffe62c2543239\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed483e5ca64a75eb771c4abc1a3b6e36ddebab2b7cb9fedcc90ba1e4063038c3\"" Jan 23 18:32:32.138831 containerd[1695]: time="2026-01-23T18:32:32.137289661Z" level=info msg="StartContainer for \"ed483e5ca64a75eb771c4abc1a3b6e36ddebab2b7cb9fedcc90ba1e4063038c3\"" Jan 23 18:32:32.138831 containerd[1695]: time="2026-01-23T18:32:32.138345857Z" level=info msg="connecting to shim ed483e5ca64a75eb771c4abc1a3b6e36ddebab2b7cb9fedcc90ba1e4063038c3" address="unix:///run/containerd/s/85882c72b42306fa6df79ca377dff46f62c907cb5cefc1d210d13d4605c6c306" protocol=ttrpc version=3 Jan 23 18:32:32.155975 systemd[1]: Started cri-containerd-ed483e5ca64a75eb771c4abc1a3b6e36ddebab2b7cb9fedcc90ba1e4063038c3.scope - libcontainer container ed483e5ca64a75eb771c4abc1a3b6e36ddebab2b7cb9fedcc90ba1e4063038c3. Jan 23 18:32:32.195000 audit: BPF prog-id=143 op=LOAD Jan 23 18:32:32.195000 audit[3081]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3044 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343833653563613634613735656237373163346162633161336236 Jan 23 18:32:32.195000 audit: BPF prog-id=144 op=LOAD Jan 23 18:32:32.195000 audit[3081]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3044 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343833653563613634613735656237373163346162633161336236 Jan 23 18:32:32.195000 audit: BPF prog-id=144 op=UNLOAD Jan 23 18:32:32.195000 audit[3081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3044 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343833653563613634613735656237373163346162633161336236 Jan 23 18:32:32.195000 audit: BPF prog-id=143 op=UNLOAD Jan 23 18:32:32.195000 audit[3081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3044 pid=3081 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343833653563613634613735656237373163346162633161336236 Jan 23 18:32:32.195000 audit: BPF prog-id=145 op=LOAD Jan 23 18:32:32.195000 audit[3081]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3044 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343833653563613634613735656237373163346162633161336236 Jan 23 18:32:32.221933 containerd[1695]: time="2026-01-23T18:32:32.221899513Z" level=info msg="StartContainer for \"ed483e5ca64a75eb771c4abc1a3b6e36ddebab2b7cb9fedcc90ba1e4063038c3\" returns successfully" Jan 23 18:32:32.426000 audit[3144]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.426000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd83723a70 a2=0 a3=7ffd83723a5c items=0 ppid=3094 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:32:32.428000 audit[3145]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.428000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcea0befb0 a2=0 a3=7ffcea0bef9c items=0 ppid=3094 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:32:32.430000 audit[3150]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.430000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8412cbf0 a2=0 a3=7ffc8412cbdc items=0 ppid=3094 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:32:32.431000 audit[3149]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.431000 audit[3149]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe20c05b00 a2=0 a3=7ffe20c05aec items=0 ppid=3094 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:32:32.434000 audit[3153]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.434000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd43286880 a2=0 a3=7ffd4328686c items=0 ppid=3094 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:32:32.437000 audit[3154]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.437000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb98da7b0 a2=0 a3=7ffdb98da79c items=0 ppid=3094 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:32:32.534000 audit[3155]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.534000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffee38b3420 a2=0 a3=7ffee38b340c items=0 ppid=3094 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.534000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:32:32.539000 audit[3157]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.539000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffea099bc70 a2=0 a3=7ffea099bc5c items=0 ppid=3094 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 23 18:32:32.544000 audit[3160]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.544000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc1bb45450 a2=0 
a3=7ffc1bb4543c items=0 ppid=3094 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 23 18:32:32.546000 audit[3161]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.546000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc27ed7190 a2=0 a3=7ffc27ed717c items=0 ppid=3094 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:32:32.549000 audit[3163]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.549000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea137fa10 a2=0 a3=7ffea137f9fc items=0 ppid=3094 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:32:32.550000 audit[3164]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.550000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffced03cb90 a2=0 a3=7ffced03cb7c items=0 ppid=3094 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:32:32.552000 audit[3166]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.552000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdb8e49740 a2=0 a3=7ffdb8e4972c items=0 ppid=3094 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.556000 audit[3169]: NETFILTER_CFG table=filter:67 family=2 entries=1 
op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.556000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4e675970 a2=0 a3=7fff4e67595c items=0 ppid=3094 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.556000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.557000 audit[3170]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.557000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5d22f660 a2=0 a3=7ffd5d22f64c items=0 ppid=3094 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:32:32.560000 audit[3172]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.560000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdcfa861f0 a2=0 a3=7ffdcfa861dc items=0 ppid=3094 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:32:32.561000 audit[3173]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.561000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe968ee0a0 a2=0 a3=7ffe968ee08c items=0 ppid=3094 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:32:32.564000 audit[3175]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.564000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1182e620 a2=0 a3=7ffd1182e60c items=0 ppid=3094 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.564000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 23 18:32:32.567000 audit[3178]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.567000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe963ea020 a2=0 a3=7ffe963ea00c items=0 ppid=3094 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 23 18:32:32.572000 audit[3181]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.572000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf54eaa30 a2=0 a3=7ffcf54eaa1c items=0 ppid=3094 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 23 18:32:32.573000 audit[3182]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.573000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff8266cd90 a2=0 a3=7fff8266cd7c items=0 ppid=3094 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:32:32.575000 audit[3184]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.575000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcb1519a60 a2=0 a3=7ffcb1519a4c items=0 ppid=3094 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.579000 audit[3187]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.579000 audit[3187]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffd312b9ab0 a2=0 a3=7ffd312b9a9c items=0 ppid=3094 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.580000 audit[3188]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.580000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe17391e10 a2=0 a3=7ffe17391dfc items=0 ppid=3094 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:32:32.583000 audit[3190]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:32:32.583000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe1e124830 a2=0 a3=7ffe1e12481c items=0 ppid=3094 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:32:32.605000 audit[3196]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:32.605000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6ea225c0 a2=0 a3=7ffc6ea225ac items=0 ppid=3094 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:32.615000 audit[3196]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:32.615000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc6ea225c0 a2=0 a3=7ffc6ea225ac items=0 ppid=3094 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:32.616000 audit[3201]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.616000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 
a0=3 a1=7ffc7e6927c0 a2=0 a3=7ffc7e6927ac items=0 ppid=3094 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:32:32.619000 audit[3203]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.619000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc6ff0a860 a2=0 a3=7ffc6ff0a84c items=0 ppid=3094 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 23 18:32:32.623000 audit[3206]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.623000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff0e2bd440 a2=0 a3=7fff0e2bd42c items=0 ppid=3094 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 23 18:32:32.624000 audit[3207]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.624000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdeb4bc790 a2=0 a3=7ffdeb4bc77c items=0 ppid=3094 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.624000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:32:32.627000 audit[3209]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.627000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc48e24ce0 a2=0 a3=7ffc48e24ccc items=0 ppid=3094 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:32:32.628000 
audit[3210]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.628000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6277ee20 a2=0 a3=7fff6277ee0c items=0 ppid=3094 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.628000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:32:32.630000 audit[3212]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.630000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd861ef300 a2=0 a3=7ffd861ef2ec items=0 ppid=3094 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.630000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.634000 audit[3215]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.634000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe2807f740 a2=0 a3=7ffe2807f72c items=0 ppid=3094 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.636000 audit[3216]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.636000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8cd4a3a0 a2=0 a3=7ffc8cd4a38c items=0 ppid=3094 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:32:32.638000 audit[3218]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.638000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe763dc680 a2=0 a3=7ffe763dc66c items=0 ppid=3094 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.638000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:32:32.640000 audit[3219]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.640000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe52e7d8d0 a2=0 a3=7ffe52e7d8bc items=0 ppid=3094 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:32:32.644000 audit[3221]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.644000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee1b9e240 a2=0 a3=7ffee1b9e22c items=0 ppid=3094 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 23 18:32:32.647000 audit[3224]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.647000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe287f7eb0 a2=0 a3=7ffe287f7e9c items=0 ppid=3094 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 23 18:32:32.651000 audit[3227]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.651000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcaf8cd7a0 a2=0 a3=7ffcaf8cd78c items=0 ppid=3094 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 23 18:32:32.652000 audit[3228]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.652000 
audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffaf1c6060 a2=0 a3=7fffaf1c604c items=0 ppid=3094 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.652000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:32:32.655000 audit[3230]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.655000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffff598c430 a2=0 a3=7ffff598c41c items=0 ppid=3094 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.659000 audit[3233]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.659000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb15a04c0 a2=0 a3=7ffeb15a04ac items=0 ppid=3094 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:32:32.660000 audit[3234]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.660000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc730133e0 a2=0 a3=7ffc730133cc items=0 ppid=3094 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:32:32.662000 audit[3236]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.662000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe15889820 a2=0 a3=7ffe1588980c items=0 ppid=3094 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:32:32.664000 audit[3237]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3237 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.664000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1b1bec20 a2=0 a3=7ffc1b1bec0c items=0 ppid=3094 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.664000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:32:32.666000 audit[3239]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.666000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff403aecb0 a2=0 a3=7fff403aec9c items=0 ppid=3094 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:32:32.669000 audit[3242]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:32:32.669000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe51244cb0 a2=0 a3=7ffe51244c9c items=0 ppid=3094 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:32:32.673000 audit[3244]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:32:32.673000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe18946d80 a2=0 a3=7ffe18946d6c items=0 ppid=3094 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.673000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:32.673000 audit[3244]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:32:32.673000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe18946d80 a2=0 a3=7ffe18946d6c items=0 ppid=3094 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:32.673000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:32.784279 kubelet[2940]: I0123 18:32:32.784075 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p5m2l" podStartSLOduration=2.784051414 podStartE2EDuration="2.784051414s" podCreationTimestamp="2026-01-23 18:32:30 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:32.78273633 +0000 UTC m=+9.167507409" watchObservedRunningTime="2026-01-23 18:32:32.784051414 +0000 UTC m=+9.168822456" Jan 23 18:32:33.451182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170927080.mount: Deactivated successfully. Jan 23 18:32:34.273508 containerd[1695]: time="2026-01-23T18:32:34.273458065Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:34.274807 containerd[1695]: time="2026-01-23T18:32:34.274681349Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 23 18:32:34.276114 containerd[1695]: time="2026-01-23T18:32:34.276093874Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:34.278963 containerd[1695]: time="2026-01-23T18:32:34.278939733Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:34.279561 containerd[1695]: time="2026-01-23T18:32:34.279511150Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.496445169s" Jan 23 18:32:34.279640 containerd[1695]: time="2026-01-23T18:32:34.279627418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:32:34.284640 containerd[1695]: time="2026-01-23T18:32:34.284615390Z" level=info msg="CreateContainer within sandbox \"8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:32:34.297236 containerd[1695]: time="2026-01-23T18:32:34.297203339Z" level=info msg="Container a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:34.304031 containerd[1695]: time="2026-01-23T18:32:34.303994281Z" level=info msg="CreateContainer within sandbox \"8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48\"" Jan 23 18:32:34.305214 containerd[1695]: time="2026-01-23T18:32:34.305125741Z" level=info msg="StartContainer for \"a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48\"" Jan 23 18:32:34.306045 containerd[1695]: time="2026-01-23T18:32:34.306024805Z" level=info msg="connecting to shim a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48" address="unix:///run/containerd/s/459664e11666a3242c51381cc86d46e86e42b1a949ecb28aa1180c0c2bc29fb8" protocol=ttrpc version=3 Jan 23 18:32:34.324983 systemd[1]: Started cri-containerd-a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48.scope - libcontainer container a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48. 
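Note: the kubelet pod_startup_latency_tracker entries above report durations that are plain differences of the timestamps carried in the same message (the observed running time minus podCreationTimestamp). The sketch below is a hypothetical illustration, not kubelet's own code; it reproduces the ~2.784 s figure for kube-system/kube-proxy-p5m2l from the values printed in the log.

    from datetime import datetime, timezone

    # Values copied from the pod_startup_latency_tracker entry for kube-system/kube-proxy-p5m2l above.
    created  = datetime(2026, 1, 23, 18, 32, 30, tzinfo=timezone.utc)          # podCreationTimestamp
    observed = datetime(2026, 1, 23, 18, 32, 32, 784051, tzinfo=timezone.utc)  # watchObservedRunningTime, truncated to microseconds

    print((observed - created).total_seconds())  # -> 2.784051, matching podStartE2EDuration="2.784051414s"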
Jan 23 18:32:34.334000 audit: BPF prog-id=146 op=LOAD Jan 23 18:32:34.334000 audit: BPF prog-id=147 op=LOAD Jan 23 18:32:34.334000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2999 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:34.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131366365636232613336313734353435323365646563383932316537 Jan 23 18:32:34.334000 audit: BPF prog-id=147 op=UNLOAD Jan 23 18:32:34.334000 audit[3253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:34.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131366365636232613336313734353435323365646563383932316537 Jan 23 18:32:34.334000 audit: BPF prog-id=148 op=LOAD Jan 23 18:32:34.334000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2999 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:34.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131366365636232613336313734353435323365646563383932316537 Jan 23 18:32:34.334000 audit: BPF prog-id=149 op=LOAD Jan 23 18:32:34.334000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2999 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:34.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131366365636232613336313734353435323365646563383932316537 Jan 23 18:32:34.334000 audit: BPF prog-id=149 op=UNLOAD Jan 23 18:32:34.334000 audit[3253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:34.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131366365636232613336313734353435323365646563383932316537 Jan 23 18:32:34.334000 audit: BPF prog-id=148 op=UNLOAD Jan 23 18:32:34.334000 audit[3253]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:34.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131366365636232613336313734353435323365646563383932316537 Jan 23 18:32:34.334000 audit: BPF prog-id=150 op=LOAD Jan 23 18:32:34.334000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2999 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:34.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131366365636232613336313734353435323365646563383932316537 Jan 23 18:32:34.354961 containerd[1695]: time="2026-01-23T18:32:34.354933249Z" level=info msg="StartContainer for \"a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48\" returns successfully" Jan 23 18:32:34.797325 kubelet[2940]: I0123 18:32:34.796497 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-4njb6" podStartSLOduration=1.297530171 podStartE2EDuration="3.796462484s" podCreationTimestamp="2026-01-23 18:32:31 +0000 UTC" firstStartedPulling="2026-01-23 18:32:31.781365232 +0000 UTC m=+8.166136221" lastFinishedPulling="2026-01-23 18:32:34.280297554 +0000 UTC m=+10.665068534" observedRunningTime="2026-01-23 18:32:34.794473386 +0000 UTC m=+11.179244488" watchObservedRunningTime="2026-01-23 18:32:34.796462484 +0000 UTC m=+11.181233589" Jan 23 18:32:39.869136 sudo[1967]: pam_unix(sudo:session): session closed for user root Jan 23 18:32:39.870857 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 18:32:39.870924 kernel: audit: type=1106 audit(1769193159.868:506): pid=1967 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:32:39.868000 audit[1967]: USER_END pid=1967 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:32:39.868000 audit[1967]: CRED_DISP pid=1967 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:32:39.879872 kernel: audit: type=1104 audit(1769193159.868:507): pid=1967 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 18:32:39.964592 sshd[1966]: Connection closed by 68.220.241.50 port 53016 Jan 23 18:32:39.965363 sshd-session[1962]: pam_unix(sshd:session): session closed for user core Jan 23 18:32:39.965000 audit[1962]: USER_END pid=1962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:32:39.971837 kernel: audit: type=1106 audit(1769193159.965:508): pid=1962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:32:39.973102 systemd[1]: sshd@8-10.0.6.238:22-68.220.241.50:53016.service: Deactivated successfully. Jan 23 18:32:39.965000 audit[1962]: CRED_DISP pid=1962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:32:39.977582 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:32:39.977938 kernel: audit: type=1104 audit(1769193159.965:509): pid=1962 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:32:39.978027 systemd[1]: session-10.scope: Consumed 5.753s CPU time, 230.3M memory peak. Jan 23 18:32:39.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.6.238:22-68.220.241.50:53016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:39.982829 kernel: audit: type=1131 audit(1769193159.972:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.6.238:22-68.220.241.50:53016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:32:39.982943 systemd-logind[1655]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:32:39.984858 systemd-logind[1655]: Removed session 10. 
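Note: the numeric field in records such as audit(1769193159.868:506) is a Unix timestamp in seconds (with millisecond precision) followed by an event serial number; converting it yields the same wall-clock time as the surrounding journal lines. A minimal check, purely illustrative:

    from datetime import datetime, timezone

    # audit(1769193159.868:506) -> <epoch seconds>.<milliseconds>:<event serial>
    epoch_seconds = 1769193159.868
    print(datetime.fromtimestamp(epoch_seconds, tz=timezone.utc))
    # -> 2026-01-23 18:32:39.868000+00:00, i.e. the "Jan 23 18:32:39.868" shown above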
Jan 23 18:32:40.654000 audit[3336]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:40.657838 kernel: audit: type=1325 audit(1769193160.654:511): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:40.654000 audit[3336]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc881ab430 a2=0 a3=7ffc881ab41c items=0 ppid=3094 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:40.663856 kernel: audit: type=1300 audit(1769193160.654:511): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc881ab430 a2=0 a3=7ffc881ab41c items=0 ppid=3094 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:40.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:40.665851 kernel: audit: type=1327 audit(1769193160.654:511): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:40.659000 audit[3336]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:40.668887 kernel: audit: type=1325 audit(1769193160.659:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:40.659000 audit[3336]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc881ab430 a2=0 a3=0 items=0 ppid=3094 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:40.675828 kernel: audit: type=1300 audit(1769193160.659:512): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc881ab430 a2=0 a3=0 items=0 ppid=3094 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:40.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:41.820000 audit[3338]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:41.820000 audit[3338]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7d3337d0 a2=0 a3=7fff7d3337bc items=0 ppid=3094 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:41.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:41.829000 audit[3338]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:41.829000 audit[3338]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7d3337d0 a2=0 a3=0 items=0 ppid=3094 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:41.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:43.164000 audit[3340]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:43.164000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcf4aba750 a2=0 a3=7ffcf4aba73c items=0 ppid=3094 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:43.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:43.168000 audit[3340]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:43.168000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf4aba750 a2=0 a3=0 items=0 ppid=3094 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:43.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:44.178000 audit[3342]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:44.178000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcdc9e80e0 a2=0 a3=7ffcdc9e80cc items=0 ppid=3094 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:44.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:44.182000 audit[3342]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:44.182000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcdc9e80e0 a2=0 a3=0 items=0 ppid=3094 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:44.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:44.815433 systemd[1]: Created slice kubepods-besteffort-pod896ccd11_2f82_405b_8a4f_f3a215f55480.slice - libcontainer container kubepods-besteffort-pod896ccd11_2f82_405b_8a4f_f3a215f55480.slice. 
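Note: the proctitle= fields in the NETFILTER_CFG/SYSCALL records above carry the full argv of the invoking process, hex-encoded because the NUL separators between arguments are not printable. A minimal decoder (an illustrative sketch, not part of the system under observation):

    # Decode one of the hex-encoded proctitle values from the audit records above.
    hexstr = "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-restore -w 5 --noflush --counters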
Jan 23 18:32:44.851428 kubelet[2940]: I0123 18:32:44.851336 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/896ccd11-2f82-405b-8a4f-f3a215f55480-typha-certs\") pod \"calico-typha-675bffb7-5z52p\" (UID: \"896ccd11-2f82-405b-8a4f-f3a215f55480\") " pod="calico-system/calico-typha-675bffb7-5z52p" Jan 23 18:32:44.851428 kubelet[2940]: I0123 18:32:44.851380 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/896ccd11-2f82-405b-8a4f-f3a215f55480-tigera-ca-bundle\") pod \"calico-typha-675bffb7-5z52p\" (UID: \"896ccd11-2f82-405b-8a4f-f3a215f55480\") " pod="calico-system/calico-typha-675bffb7-5z52p" Jan 23 18:32:44.851428 kubelet[2940]: I0123 18:32:44.851403 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4k5\" (UniqueName: \"kubernetes.io/projected/896ccd11-2f82-405b-8a4f-f3a215f55480-kube-api-access-5m4k5\") pod \"calico-typha-675bffb7-5z52p\" (UID: \"896ccd11-2f82-405b-8a4f-f3a215f55480\") " pod="calico-system/calico-typha-675bffb7-5z52p" Jan 23 18:32:44.981686 systemd[1]: Created slice kubepods-besteffort-pod801abf44_1cab_42bb_b518_0325722a32d9.slice - libcontainer container kubepods-besteffort-pod801abf44_1cab_42bb_b518_0325722a32d9.slice. Jan 23 18:32:45.053099 kubelet[2940]: I0123 18:32:45.053049 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-var-lib-calico\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053428 kubelet[2940]: I0123 18:32:45.053284 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-var-run-calico\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053428 kubelet[2940]: I0123 18:32:45.053306 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-flexvol-driver-host\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053428 kubelet[2940]: I0123 18:32:45.053362 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-xtables-lock\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053428 kubelet[2940]: I0123 18:32:45.053382 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/801abf44-1cab-42bb-b518-0325722a32d9-node-certs\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053428 kubelet[2940]: I0123 18:32:45.053396 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq87r\" 
(UniqueName: \"kubernetes.io/projected/801abf44-1cab-42bb-b518-0325722a32d9-kube-api-access-xq87r\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053765 kubelet[2940]: I0123 18:32:45.053549 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-cni-bin-dir\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053765 kubelet[2940]: I0123 18:32:45.053669 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-cni-log-dir\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053765 kubelet[2940]: I0123 18:32:45.053684 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-lib-modules\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053765 kubelet[2940]: I0123 18:32:45.053698 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-cni-net-dir\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.053933 kubelet[2940]: I0123 18:32:45.053711 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/801abf44-1cab-42bb-b518-0325722a32d9-policysync\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.054007 kubelet[2940]: I0123 18:32:45.053971 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/801abf44-1cab-42bb-b518-0325722a32d9-tigera-ca-bundle\") pod \"calico-node-vb4p4\" (UID: \"801abf44-1cab-42bb-b518-0325722a32d9\") " pod="calico-system/calico-node-vb4p4" Jan 23 18:32:45.125939 containerd[1695]: time="2026-01-23T18:32:45.125273735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675bffb7-5z52p,Uid:896ccd11-2f82-405b-8a4f-f3a215f55480,Namespace:calico-system,Attempt:0,}" Jan 23 18:32:45.170222 kubelet[2940]: E0123 18:32:45.170017 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.170222 kubelet[2940]: W0123 18:32:45.170114 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.170892 kubelet[2940]: E0123 18:32:45.170446 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.171542 kubelet[2940]: E0123 18:32:45.171431 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.171542 kubelet[2940]: W0123 18:32:45.171443 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.171542 kubelet[2940]: E0123 18:32:45.171459 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.172677 kubelet[2940]: E0123 18:32:45.171919 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.172677 kubelet[2940]: W0123 18:32:45.171932 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.172677 kubelet[2940]: E0123 18:32:45.171944 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.173686 kubelet[2940]: E0123 18:32:45.173383 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.174302 kubelet[2940]: W0123 18:32:45.174232 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.174302 kubelet[2940]: E0123 18:32:45.174248 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.174554 kubelet[2940]: E0123 18:32:45.174547 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.176892 kubelet[2940]: W0123 18:32:45.176871 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.176956 kubelet[2940]: E0123 18:32:45.176947 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.177787 kubelet[2940]: E0123 18:32:45.177734 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.177787 kubelet[2940]: W0123 18:32:45.177745 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.177787 kubelet[2940]: E0123 18:32:45.177754 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.178968 kubelet[2940]: E0123 18:32:45.173993 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:32:45.179092 kubelet[2940]: E0123 18:32:45.179066 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.179092 kubelet[2940]: W0123 18:32:45.179074 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.179092 kubelet[2940]: E0123 18:32:45.179083 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.179506 kubelet[2940]: E0123 18:32:45.179456 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.179550 kubelet[2940]: W0123 18:32:45.179542 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.179605 kubelet[2940]: E0123 18:32:45.179576 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.179833 kubelet[2940]: E0123 18:32:45.179798 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.179833 kubelet[2940]: W0123 18:32:45.179806 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.179833 kubelet[2940]: E0123 18:32:45.179822 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.180066 kubelet[2940]: E0123 18:32:45.180054 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.180189 kubelet[2940]: W0123 18:32:45.180098 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.180189 kubelet[2940]: E0123 18:32:45.180107 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.180310 kubelet[2940]: E0123 18:32:45.180304 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.180350 kubelet[2940]: W0123 18:32:45.180344 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.180391 kubelet[2940]: E0123 18:32:45.180376 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.180563 kubelet[2940]: E0123 18:32:45.180541 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.180563 kubelet[2940]: W0123 18:32:45.180549 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.180563 kubelet[2940]: E0123 18:32:45.180555 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.180797 kubelet[2940]: E0123 18:32:45.180776 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.180797 kubelet[2940]: W0123 18:32:45.180783 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.180797 kubelet[2940]: E0123 18:32:45.180790 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.181087 kubelet[2940]: E0123 18:32:45.181039 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.181087 kubelet[2940]: W0123 18:32:45.181046 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.181087 kubelet[2940]: E0123 18:32:45.181053 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.181309 kubelet[2940]: E0123 18:32:45.181243 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.181309 kubelet[2940]: W0123 18:32:45.181250 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.181309 kubelet[2940]: E0123 18:32:45.181256 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.181692 kubelet[2940]: E0123 18:32:45.181510 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.181692 kubelet[2940]: W0123 18:32:45.181517 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.181692 kubelet[2940]: E0123 18:32:45.181523 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.181912 kubelet[2940]: E0123 18:32:45.181898 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.181949 kubelet[2940]: W0123 18:32:45.181912 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.181949 kubelet[2940]: E0123 18:32:45.181922 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.194473 containerd[1695]: time="2026-01-23T18:32:45.194424030Z" level=info msg="connecting to shim b98036c6af5b9e99f3940820384177ec56cdc88ee34fe36fb3ea88e8ac2e730b" address="unix:///run/containerd/s/41f4009afeadabb58cf22c182b73f84a714ceb949c44e6c80e3741ff32492a44" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:32:45.198162 kubelet[2940]: E0123 18:32:45.198121 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.199271 kubelet[2940]: W0123 18:32:45.198141 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.199271 kubelet[2940]: E0123 18:32:45.199177 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.215439 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 23 18:32:45.215546 kernel: audit: type=1325 audit(1769193165.210:519): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:45.210000 audit[3395]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:45.220964 kernel: audit: type=1300 audit(1769193165.210:519): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcbd186b70 a2=0 a3=7ffcbd186b5c items=0 ppid=3094 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.210000 audit[3395]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcbd186b70 a2=0 a3=7ffcbd186b5c items=0 ppid=3094 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:45.227925 kernel: audit: type=1327 audit(1769193165.210:519): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:45.228048 kernel: audit: type=1325 audit(1769193165.221:520): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:45.221000 audit[3395]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:32:45.221000 audit[3395]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbd186b70 a2=0 a3=0 items=0 ppid=3094 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.236743 kernel: audit: type=1300 audit(1769193165.221:520): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbd186b70 a2=0 a3=0 items=0 ppid=3094 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:45.239837 kernel: audit: type=1327 audit(1769193165.221:520): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:32:45.245103 kubelet[2940]: E0123 18:32:45.245082 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.245103 kubelet[2940]: W0123 18:32:45.245100 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.245206 kubelet[2940]: E0123 18:32:45.245123 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.245836 kubelet[2940]: E0123 18:32:45.245251 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.245836 kubelet[2940]: W0123 18:32:45.245260 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.245836 kubelet[2940]: E0123 18:32:45.245266 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.245836 kubelet[2940]: E0123 18:32:45.245420 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.245836 kubelet[2940]: W0123 18:32:45.245426 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.245836 kubelet[2940]: E0123 18:32:45.245434 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.245836 kubelet[2940]: E0123 18:32:45.245765 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.245836 kubelet[2940]: W0123 18:32:45.245772 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.245836 kubelet[2940]: E0123 18:32:45.245782 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.246214 kubelet[2940]: E0123 18:32:45.245948 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.246214 kubelet[2940]: W0123 18:32:45.245953 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.246214 kubelet[2940]: E0123 18:32:45.245960 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.246214 kubelet[2940]: E0123 18:32:45.246070 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.246214 kubelet[2940]: W0123 18:32:45.246078 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.246214 kubelet[2940]: E0123 18:32:45.246084 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.246214 kubelet[2940]: E0123 18:32:45.246186 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.246214 kubelet[2940]: W0123 18:32:45.246192 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.246214 kubelet[2940]: E0123 18:32:45.246197 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.246380 kubelet[2940]: E0123 18:32:45.246300 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.246380 kubelet[2940]: W0123 18:32:45.246305 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.246380 kubelet[2940]: E0123 18:32:45.246310 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.246437 kubelet[2940]: E0123 18:32:45.246417 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.246437 kubelet[2940]: W0123 18:32:45.246422 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.246437 kubelet[2940]: E0123 18:32:45.246428 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.247876 kubelet[2940]: E0123 18:32:45.246532 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.247876 kubelet[2940]: W0123 18:32:45.246542 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.247876 kubelet[2940]: E0123 18:32:45.246547 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.247876 kubelet[2940]: E0123 18:32:45.246652 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.247876 kubelet[2940]: W0123 18:32:45.246657 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.247876 kubelet[2940]: E0123 18:32:45.246662 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.247876 kubelet[2940]: E0123 18:32:45.246763 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.247876 kubelet[2940]: W0123 18:32:45.246768 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.247876 kubelet[2940]: E0123 18:32:45.246773 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.247876 kubelet[2940]: E0123 18:32:45.246897 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248132 kubelet[2940]: W0123 18:32:45.246902 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248132 kubelet[2940]: E0123 18:32:45.246908 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.248132 kubelet[2940]: E0123 18:32:45.247011 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248132 kubelet[2940]: W0123 18:32:45.247016 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248132 kubelet[2940]: E0123 18:32:45.247020 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.248132 kubelet[2940]: E0123 18:32:45.247125 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248132 kubelet[2940]: W0123 18:32:45.247131 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248132 kubelet[2940]: E0123 18:32:45.247136 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.248132 kubelet[2940]: E0123 18:32:45.247240 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248132 kubelet[2940]: W0123 18:32:45.247245 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248321 kubelet[2940]: E0123 18:32:45.247250 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.248321 kubelet[2940]: E0123 18:32:45.247363 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248321 kubelet[2940]: W0123 18:32:45.247377 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248321 kubelet[2940]: E0123 18:32:45.247383 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.248321 kubelet[2940]: E0123 18:32:45.247507 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248321 kubelet[2940]: W0123 18:32:45.247512 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248321 kubelet[2940]: E0123 18:32:45.247517 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.248321 kubelet[2940]: E0123 18:32:45.247633 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248321 kubelet[2940]: W0123 18:32:45.247638 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248321 kubelet[2940]: E0123 18:32:45.247645 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.248770 kubelet[2940]: E0123 18:32:45.247751 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.248770 kubelet[2940]: W0123 18:32:45.247756 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.248770 kubelet[2940]: E0123 18:32:45.247761 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.249023 systemd[1]: Started cri-containerd-b98036c6af5b9e99f3940820384177ec56cdc88ee34fe36fb3ea88e8ac2e730b.scope - libcontainer container b98036c6af5b9e99f3940820384177ec56cdc88ee34fe36fb3ea88e8ac2e730b. 
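The repeated driver-call.go / plugins.go entries above come from the kubelet's FlexVolume prober: it finds the driver directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the uds executable inside it is missing, so the init call produces no output and the JSON unmarshal fails. By the FlexVolume convention a driver invoked with "init" prints a JSON status object; a minimal illustrative stand-in in Python (a sketch of the expected reply, not the real uds driver for this node) would be:

#!/usr/bin/env python3
# Hypothetical stand-in for the missing uds FlexVolume driver binary.
# Only "init" is handled; other operations are declined with "Not supported",
# the usual FlexVolume way of reporting an unimplemented call.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # An empty reply here is exactly what produces the
        # "unexpected end of JSON input" errors in the log above.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))
    return 0

if __name__ == "__main__":
    sys.exit(main())

With any executable of that shape present at .../nodeagent~uds/uds the probe would parse cleanly; as the surrounding entries show, sandbox creation and image pulls proceed despite the noise.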
Jan 23 18:32:45.257477 kubelet[2940]: E0123 18:32:45.257458 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.257477 kubelet[2940]: W0123 18:32:45.257481 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.257577 kubelet[2940]: E0123 18:32:45.257496 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.257577 kubelet[2940]: I0123 18:32:45.257522 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bbaf1bf-0602-4b75-8639-4ec842393e67-kubelet-dir\") pod \"csi-node-driver-p4bbc\" (UID: \"1bbaf1bf-0602-4b75-8639-4ec842393e67\") " pod="calico-system/csi-node-driver-p4bbc" Jan 23 18:32:45.257939 kubelet[2940]: E0123 18:32:45.257926 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.257939 kubelet[2940]: W0123 18:32:45.257939 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.258002 kubelet[2940]: E0123 18:32:45.257950 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.258002 kubelet[2940]: I0123 18:32:45.257978 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1bbaf1bf-0602-4b75-8639-4ec842393e67-registration-dir\") pod \"csi-node-driver-p4bbc\" (UID: \"1bbaf1bf-0602-4b75-8639-4ec842393e67\") " pod="calico-system/csi-node-driver-p4bbc" Jan 23 18:32:45.258151 kubelet[2940]: E0123 18:32:45.258138 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.258151 kubelet[2940]: W0123 18:32:45.258148 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.258196 kubelet[2940]: E0123 18:32:45.258157 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.258441 kubelet[2940]: I0123 18:32:45.258239 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1bbaf1bf-0602-4b75-8639-4ec842393e67-socket-dir\") pod \"csi-node-driver-p4bbc\" (UID: \"1bbaf1bf-0602-4b75-8639-4ec842393e67\") " pod="calico-system/csi-node-driver-p4bbc" Jan 23 18:32:45.258477 kubelet[2940]: E0123 18:32:45.258457 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.258477 kubelet[2940]: W0123 18:32:45.258464 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.258477 kubelet[2940]: E0123 18:32:45.258473 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.258634 kubelet[2940]: E0123 18:32:45.258622 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.258634 kubelet[2940]: W0123 18:32:45.258631 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.258685 kubelet[2940]: E0123 18:32:45.258637 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.259209 kubelet[2940]: E0123 18:32:45.259194 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.259244 kubelet[2940]: W0123 18:32:45.259213 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.259244 kubelet[2940]: E0123 18:32:45.259225 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.259364 kubelet[2940]: E0123 18:32:45.259355 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.259390 kubelet[2940]: W0123 18:32:45.259364 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.259390 kubelet[2940]: E0123 18:32:45.259381 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.259523 kubelet[2940]: E0123 18:32:45.259514 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.259552 kubelet[2940]: W0123 18:32:45.259532 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.259552 kubelet[2940]: E0123 18:32:45.259540 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.259681 kubelet[2940]: I0123 18:32:45.259616 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b767v\" (UniqueName: \"kubernetes.io/projected/1bbaf1bf-0602-4b75-8639-4ec842393e67-kube-api-access-b767v\") pod \"csi-node-driver-p4bbc\" (UID: \"1bbaf1bf-0602-4b75-8639-4ec842393e67\") " pod="calico-system/csi-node-driver-p4bbc" Jan 23 18:32:45.259838 kubelet[2940]: E0123 18:32:45.259812 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.259838 kubelet[2940]: W0123 18:32:45.259836 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.259930 kubelet[2940]: E0123 18:32:45.259845 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.260000 kubelet[2940]: E0123 18:32:45.259981 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.260000 kubelet[2940]: W0123 18:32:45.259989 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.260000 kubelet[2940]: E0123 18:32:45.259996 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.260500 kubelet[2940]: E0123 18:32:45.260133 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.260500 kubelet[2940]: W0123 18:32:45.260138 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.260500 kubelet[2940]: E0123 18:32:45.260144 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.260500 kubelet[2940]: E0123 18:32:45.260287 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.260500 kubelet[2940]: W0123 18:32:45.260292 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.260500 kubelet[2940]: E0123 18:32:45.260298 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.260500 kubelet[2940]: I0123 18:32:45.260314 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1bbaf1bf-0602-4b75-8639-4ec842393e67-varrun\") pod \"csi-node-driver-p4bbc\" (UID: \"1bbaf1bf-0602-4b75-8639-4ec842393e67\") " pod="calico-system/csi-node-driver-p4bbc" Jan 23 18:32:45.260500 kubelet[2940]: E0123 18:32:45.260430 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.260500 kubelet[2940]: W0123 18:32:45.260437 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.260668 kubelet[2940]: E0123 18:32:45.260443 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.260668 kubelet[2940]: E0123 18:32:45.260603 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.260668 kubelet[2940]: W0123 18:32:45.260609 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.260668 kubelet[2940]: E0123 18:32:45.260615 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.260751 kubelet[2940]: E0123 18:32:45.260742 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.260751 kubelet[2940]: W0123 18:32:45.260750 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.260788 kubelet[2940]: E0123 18:32:45.260756 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.279000 audit: BPF prog-id=151 op=LOAD Jan 23 18:32:45.280000 audit: BPF prog-id=152 op=LOAD Jan 23 18:32:45.284469 kernel: audit: type=1334 audit(1769193165.279:521): prog-id=151 op=LOAD Jan 23 18:32:45.284536 kernel: audit: type=1334 audit(1769193165.280:522): prog-id=152 op=LOAD Jan 23 18:32:45.284551 kernel: audit: type=1300 audit(1769193165.280:522): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.293860 kernel: audit: type=1327 audit(1769193165.280:522): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.280000 audit: BPF prog-id=152 op=UNLOAD Jan 23 18:32:45.280000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.280000 audit: BPF prog-id=153 op=LOAD Jan 23 18:32:45.280000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.280000 audit: BPF prog-id=154 op=LOAD Jan 23 18:32:45.280000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.280000 audit: BPF prog-id=154 op=UNLOAD Jan 23 18:32:45.280000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.280000 audit: BPF prog-id=153 op=UNLOAD Jan 23 18:32:45.280000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.280000 audit: BPF prog-id=155 op=LOAD Jan 23 18:32:45.280000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3379 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239383033366336616635623965393966333934303832303338343137 Jan 23 18:32:45.297832 containerd[1695]: time="2026-01-23T18:32:45.297157967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vb4p4,Uid:801abf44-1cab-42bb-b518-0325722a32d9,Namespace:calico-system,Attempt:0,}" Jan 23 18:32:45.336183 containerd[1695]: time="2026-01-23T18:32:45.336005069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675bffb7-5z52p,Uid:896ccd11-2f82-405b-8a4f-f3a215f55480,Namespace:calico-system,Attempt:0,} returns sandbox id \"b98036c6af5b9e99f3940820384177ec56cdc88ee34fe36fb3ea88e8ac2e730b\"" Jan 23 18:32:45.340605 containerd[1695]: time="2026-01-23T18:32:45.340459854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:32:45.351127 containerd[1695]: time="2026-01-23T18:32:45.351083589Z" level=info msg="connecting to shim fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497" address="unix:///run/containerd/s/dfe0d25e7dc68f7332626192e755d850a6858145d14c99310fca5c5a77742c44" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:32:45.361433 kubelet[2940]: E0123 18:32:45.361404 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.361433 kubelet[2940]: 
W0123 18:32:45.361424 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.361970 kubelet[2940]: E0123 18:32:45.361443 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.361970 kubelet[2940]: E0123 18:32:45.361648 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.361970 kubelet[2940]: W0123 18:32:45.361654 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.361970 kubelet[2940]: E0123 18:32:45.361661 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.361970 kubelet[2940]: E0123 18:32:45.361884 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.361970 kubelet[2940]: W0123 18:32:45.361900 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.361970 kubelet[2940]: E0123 18:32:45.361918 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.362953 kubelet[2940]: E0123 18:32:45.362869 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.362953 kubelet[2940]: W0123 18:32:45.362879 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.362953 kubelet[2940]: E0123 18:32:45.362887 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.363125 kubelet[2940]: E0123 18:32:45.363048 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.363125 kubelet[2940]: W0123 18:32:45.363054 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.363125 kubelet[2940]: E0123 18:32:45.363061 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.363211 kubelet[2940]: E0123 18:32:45.363203 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.363236 kubelet[2940]: W0123 18:32:45.363211 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.363236 kubelet[2940]: E0123 18:32:45.363217 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.363508 kubelet[2940]: E0123 18:32:45.363432 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.363508 kubelet[2940]: W0123 18:32:45.363438 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.363508 kubelet[2940]: E0123 18:32:45.363445 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.363644 kubelet[2940]: E0123 18:32:45.363633 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.363644 kubelet[2940]: W0123 18:32:45.363642 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.363691 kubelet[2940]: E0123 18:32:45.363648 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.363918 kubelet[2940]: E0123 18:32:45.363908 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.363918 kubelet[2940]: W0123 18:32:45.363918 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.363971 kubelet[2940]: E0123 18:32:45.363925 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.365038 kubelet[2940]: E0123 18:32:45.364910 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.365038 kubelet[2940]: W0123 18:32:45.364925 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.365038 kubelet[2940]: E0123 18:32:45.364935 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.366056 kubelet[2940]: E0123 18:32:45.365548 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.366056 kubelet[2940]: W0123 18:32:45.365560 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.366056 kubelet[2940]: E0123 18:32:45.365571 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.366056 kubelet[2940]: E0123 18:32:45.366054 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.366303 kubelet[2940]: W0123 18:32:45.366062 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.366303 kubelet[2940]: E0123 18:32:45.366071 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.366303 kubelet[2940]: E0123 18:32:45.366271 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.366303 kubelet[2940]: W0123 18:32:45.366278 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.366303 kubelet[2940]: E0123 18:32:45.366285 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.366879 kubelet[2940]: E0123 18:32:45.366556 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.366879 kubelet[2940]: W0123 18:32:45.366563 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.366879 kubelet[2940]: E0123 18:32:45.366571 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.367034 kubelet[2940]: E0123 18:32:45.367022 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.367034 kubelet[2940]: W0123 18:32:45.367033 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.367079 kubelet[2940]: E0123 18:32:45.367042 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.367729 kubelet[2940]: E0123 18:32:45.367647 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.367890 kubelet[2940]: W0123 18:32:45.367728 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.367929 kubelet[2940]: E0123 18:32:45.367894 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.370630 kubelet[2940]: E0123 18:32:45.370236 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.370630 kubelet[2940]: W0123 18:32:45.370247 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.370630 kubelet[2940]: E0123 18:32:45.370257 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.371225 kubelet[2940]: E0123 18:32:45.371152 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.371225 kubelet[2940]: W0123 18:32:45.371160 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.371225 kubelet[2940]: E0123 18:32:45.371167 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.371945 kubelet[2940]: E0123 18:32:45.371873 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.371945 kubelet[2940]: W0123 18:32:45.371883 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.371945 kubelet[2940]: E0123 18:32:45.371892 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.372543 kubelet[2940]: E0123 18:32:45.372522 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.372543 kubelet[2940]: W0123 18:32:45.372531 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.372543 kubelet[2940]: E0123 18:32:45.372539 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.372935 kubelet[2940]: E0123 18:32:45.372926 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.372935 kubelet[2940]: W0123 18:32:45.372935 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.373442 kubelet[2940]: E0123 18:32:45.372942 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.374065 kubelet[2940]: E0123 18:32:45.373812 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.374065 kubelet[2940]: W0123 18:32:45.373895 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.374065 kubelet[2940]: E0123 18:32:45.373904 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.374785 kubelet[2940]: E0123 18:32:45.374403 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.374785 kubelet[2940]: W0123 18:32:45.374423 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.374785 kubelet[2940]: E0123 18:32:45.374431 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.374785 kubelet[2940]: E0123 18:32:45.374590 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.374785 kubelet[2940]: W0123 18:32:45.374596 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.374785 kubelet[2940]: E0123 18:32:45.374602 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.375112 kubelet[2940]: E0123 18:32:45.375070 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.375112 kubelet[2940]: W0123 18:32:45.375080 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.375112 kubelet[2940]: E0123 18:32:45.375087 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:45.384929 kubelet[2940]: E0123 18:32:45.384909 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:45.384929 kubelet[2940]: W0123 18:32:45.384927 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:45.385053 kubelet[2940]: E0123 18:32:45.384945 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:45.386025 systemd[1]: Started cri-containerd-fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497.scope - libcontainer container fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497. Jan 23 18:32:45.396000 audit: BPF prog-id=156 op=LOAD Jan 23 18:32:45.399000 audit: BPF prog-id=157 op=LOAD Jan 23 18:32:45.399000 audit[3480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3469 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366233653562396539333332316439316363333834333639386132 Jan 23 18:32:45.399000 audit: BPF prog-id=157 op=UNLOAD Jan 23 18:32:45.399000 audit[3480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366233653562396539333332316439316363333834333639386132 Jan 23 18:32:45.399000 audit: BPF prog-id=158 op=LOAD Jan 23 18:32:45.399000 audit[3480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3469 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366233653562396539333332316439316363333834333639386132 Jan 23 18:32:45.399000 audit: BPF prog-id=159 op=LOAD Jan 23 18:32:45.399000 audit[3480]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3469 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.399000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366233653562396539333332316439316363333834333639386132 Jan 23 18:32:45.399000 audit: BPF prog-id=159 op=UNLOAD Jan 23 18:32:45.399000 audit[3480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366233653562396539333332316439316363333834333639386132 Jan 23 18:32:45.399000 audit: BPF prog-id=158 op=UNLOAD Jan 23 18:32:45.399000 audit[3480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366233653562396539333332316439316363333834333639386132 Jan 23 18:32:45.399000 audit: BPF prog-id=160 op=LOAD Jan 23 18:32:45.399000 audit[3480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3469 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661366233653562396539333332316439316363333834333639386132 Jan 23 18:32:45.424674 containerd[1695]: time="2026-01-23T18:32:45.424628048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vb4p4,Uid:801abf44-1cab-42bb-b518-0325722a32d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497\"" Jan 23 18:32:46.708388 kubelet[2940]: E0123 18:32:46.708341 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:32:46.783455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926515699.mount: Deactivated successfully. 
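The audit PROCTITLE fields in the records above are the audited command lines, hex-encoded with NUL bytes separating the arguments: the :519/:520 records decode to the iptables-restore invocation ("iptables-restore -w 5 --noflush --counters") and the later ones to the runc shim command lines (runc --root /run/containerd/runc/k8s.io --log ..., truncated by the kernel's proctitle limit). A small illustrative decoder in Python (not a tool referenced in this log):

#!/usr/bin/env python3
# Decode an audit PROCTITLE hex field into its NUL-separated argv.
def decode_proctitle(hexstr: str) -> list[str]:
    raw = bytes.fromhex(hexstr)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    # Value taken from the audit(1769193165.210:519) record above.
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> ['iptables-restore', '-w', '5', '--noflush', '--counters']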
Jan 23 18:32:47.329461 containerd[1695]: time="2026-01-23T18:32:47.328155436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:47.334327 containerd[1695]: time="2026-01-23T18:32:47.331620569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 23 18:32:47.334327 containerd[1695]: time="2026-01-23T18:32:47.333977566Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:47.337699 containerd[1695]: time="2026-01-23T18:32:47.337666267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:47.338069 containerd[1695]: time="2026-01-23T18:32:47.338044007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.997554028s" Jan 23 18:32:47.338201 containerd[1695]: time="2026-01-23T18:32:47.338073132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 18:32:47.339729 containerd[1695]: time="2026-01-23T18:32:47.339700730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 18:32:47.350205 containerd[1695]: time="2026-01-23T18:32:47.350167174Z" level=info msg="CreateContainer within sandbox \"b98036c6af5b9e99f3940820384177ec56cdc88ee34fe36fb3ea88e8ac2e730b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 18:32:47.367848 containerd[1695]: time="2026-01-23T18:32:47.367065474Z" level=info msg="Container 1e6f3cfba1a8f2dafed97e9983ab3d834fa883bb786381e6eb5f35ed69d174b2: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:47.372757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567860667.mount: Deactivated successfully. Jan 23 18:32:47.382937 containerd[1695]: time="2026-01-23T18:32:47.382797764Z" level=info msg="CreateContainer within sandbox \"b98036c6af5b9e99f3940820384177ec56cdc88ee34fe36fb3ea88e8ac2e730b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1e6f3cfba1a8f2dafed97e9983ab3d834fa883bb786381e6eb5f35ed69d174b2\"" Jan 23 18:32:47.383857 containerd[1695]: time="2026-01-23T18:32:47.383810874Z" level=info msg="StartContainer for \"1e6f3cfba1a8f2dafed97e9983ab3d834fa883bb786381e6eb5f35ed69d174b2\"" Jan 23 18:32:47.386242 containerd[1695]: time="2026-01-23T18:32:47.386211944Z" level=info msg="connecting to shim 1e6f3cfba1a8f2dafed97e9983ab3d834fa883bb786381e6eb5f35ed69d174b2" address="unix:///run/containerd/s/41f4009afeadabb58cf22c182b73f84a714ceb949c44e6c80e3741ff32492a44" protocol=ttrpc version=3 Jan 23 18:32:47.413085 systemd[1]: Started cri-containerd-1e6f3cfba1a8f2dafed97e9983ab3d834fa883bb786381e6eb5f35ed69d174b2.scope - libcontainer container 1e6f3cfba1a8f2dafed97e9983ab3d834fa883bb786381e6eb5f35ed69d174b2. 
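The containerd entries above interleave two pods' lifecycles; for the calico-typha pod the sequence is RunPodSandbox returning sandbox id b98036c6..., PullImage of ghcr.io/flatcar/calico/typha:v3.30.4, CreateContainer within that sandbox, then StartContainer over the same shim socket. A throwaway Python filter (an assumption of this annotation, not tooling present on the node) that groups the entries mentioning the sandbox id when reading a saved journal dump:

#!/usr/bin/env python3
# Illustrative only: print, in time order, the containerd entries from a saved
# journal dump that mention the calico-typha sandbox id, so the sandbox-scoped
# steps are easier to follow in the interleaved log.
import re
import sys

SANDBOX_ID = "b98036c6af5b9e99f3940820384177ec56cdc88ee34fe36fb3ea88e8ac2e730b"
# containerd entries look like: time="..." level=info msg="..."
ENTRY = re.compile(r'time="([^"]+)" level=\w+ msg="((?:[^"\\]|\\.)*)"')

def main(path: str) -> None:
    events = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for ts, msg in ENTRY.findall(line):
                if SANDBOX_ID in msg:
                    events.append((ts, msg))
    # RFC 3339 timestamps of equal precision sort correctly as strings.
    for ts, msg in sorted(events):
        print(ts, msg)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin")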
Jan 23 18:32:47.427000 audit: BPF prog-id=161 op=LOAD Jan 23 18:32:47.428000 audit: BPF prog-id=162 op=LOAD Jan 23 18:32:47.428000 audit[3543]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3379 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:47.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165366633636662613161386632646166656439376539393833616233 Jan 23 18:32:47.428000 audit: BPF prog-id=162 op=UNLOAD Jan 23 18:32:47.428000 audit[3543]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3379 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:47.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165366633636662613161386632646166656439376539393833616233 Jan 23 18:32:47.428000 audit: BPF prog-id=163 op=LOAD Jan 23 18:32:47.428000 audit[3543]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3379 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:47.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165366633636662613161386632646166656439376539393833616233 Jan 23 18:32:47.428000 audit: BPF prog-id=164 op=LOAD Jan 23 18:32:47.428000 audit[3543]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3379 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:47.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165366633636662613161386632646166656439376539393833616233 Jan 23 18:32:47.428000 audit: BPF prog-id=164 op=UNLOAD Jan 23 18:32:47.428000 audit[3543]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3379 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:47.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165366633636662613161386632646166656439376539393833616233 Jan 23 18:32:47.428000 audit: BPF prog-id=163 op=UNLOAD Jan 23 18:32:47.428000 audit[3543]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3379 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:47.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165366633636662613161386632646166656439376539393833616233 Jan 23 18:32:47.428000 audit: BPF prog-id=165 op=LOAD Jan 23 18:32:47.428000 audit[3543]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3379 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:47.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165366633636662613161386632646166656439376539393833616233 Jan 23 18:32:47.474805 containerd[1695]: time="2026-01-23T18:32:47.474762959Z" level=info msg="StartContainer for \"1e6f3cfba1a8f2dafed97e9983ab3d834fa883bb786381e6eb5f35ed69d174b2\" returns successfully" Jan 23 18:32:47.834124 kubelet[2940]: I0123 18:32:47.833751 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-675bffb7-5z52p" podStartSLOduration=1.8335912859999999 podStartE2EDuration="3.833732829s" podCreationTimestamp="2026-01-23 18:32:44 +0000 UTC" firstStartedPulling="2026-01-23 18:32:45.338781861 +0000 UTC m=+21.723552841" lastFinishedPulling="2026-01-23 18:32:47.338923403 +0000 UTC m=+23.723694384" observedRunningTime="2026-01-23 18:32:47.832977043 +0000 UTC m=+24.217748046" watchObservedRunningTime="2026-01-23 18:32:47.833732829 +0000 UTC m=+24.218503824" Jan 23 18:32:47.869857 kubelet[2940]: E0123 18:32:47.869647 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.869857 kubelet[2940]: W0123 18:32:47.869666 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.869857 kubelet[2940]: E0123 18:32:47.869682 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.870615 kubelet[2940]: E0123 18:32:47.870598 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.870646 kubelet[2940]: W0123 18:32:47.870623 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.870646 kubelet[2940]: E0123 18:32:47.870637 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:47.870851 kubelet[2940]: E0123 18:32:47.870838 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.870851 kubelet[2940]: W0123 18:32:47.870848 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.870895 kubelet[2940]: E0123 18:32:47.870855 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.871090 kubelet[2940]: E0123 18:32:47.871067 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.871090 kubelet[2940]: W0123 18:32:47.871077 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.871090 kubelet[2940]: E0123 18:32:47.871083 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.871372 kubelet[2940]: E0123 18:32:47.871361 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.871372 kubelet[2940]: W0123 18:32:47.871371 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.871425 kubelet[2940]: E0123 18:32:47.871379 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.871593 kubelet[2940]: E0123 18:32:47.871581 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.871593 kubelet[2940]: W0123 18:32:47.871591 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.871654 kubelet[2940]: E0123 18:32:47.871599 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.871894 kubelet[2940]: E0123 18:32:47.871870 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.871894 kubelet[2940]: W0123 18:32:47.871881 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.871894 kubelet[2940]: E0123 18:32:47.871889 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:47.872266 kubelet[2940]: E0123 18:32:47.872248 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.872266 kubelet[2940]: W0123 18:32:47.872259 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.872407 kubelet[2940]: E0123 18:32:47.872269 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.872544 kubelet[2940]: E0123 18:32:47.872530 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.872544 kubelet[2940]: W0123 18:32:47.872541 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.872592 kubelet[2940]: E0123 18:32:47.872548 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.872701 kubelet[2940]: E0123 18:32:47.872686 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.872701 kubelet[2940]: W0123 18:32:47.872696 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.872701 kubelet[2940]: E0123 18:32:47.872702 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.872857 kubelet[2940]: E0123 18:32:47.872845 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.872857 kubelet[2940]: W0123 18:32:47.872854 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.872901 kubelet[2940]: E0123 18:32:47.872860 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.873010 kubelet[2940]: E0123 18:32:47.872995 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.873010 kubelet[2940]: W0123 18:32:47.873004 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.873010 kubelet[2940]: E0123 18:32:47.873009 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:47.873206 kubelet[2940]: E0123 18:32:47.873157 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.873206 kubelet[2940]: W0123 18:32:47.873167 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.873206 kubelet[2940]: E0123 18:32:47.873172 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.873309 kubelet[2940]: E0123 18:32:47.873297 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.873309 kubelet[2940]: W0123 18:32:47.873307 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.873349 kubelet[2940]: E0123 18:32:47.873313 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.873547 kubelet[2940]: E0123 18:32:47.873533 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.873547 kubelet[2940]: W0123 18:32:47.873543 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.873595 kubelet[2940]: E0123 18:32:47.873550 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.883223 kubelet[2940]: E0123 18:32:47.883072 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.883223 kubelet[2940]: W0123 18:32:47.883112 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.883223 kubelet[2940]: E0123 18:32:47.883132 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.883401 kubelet[2940]: E0123 18:32:47.883334 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.883401 kubelet[2940]: W0123 18:32:47.883341 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.883401 kubelet[2940]: E0123 18:32:47.883348 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:47.883549 kubelet[2940]: E0123 18:32:47.883506 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.883549 kubelet[2940]: W0123 18:32:47.883515 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.883549 kubelet[2940]: E0123 18:32:47.883521 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.883709 kubelet[2940]: E0123 18:32:47.883673 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.883709 kubelet[2940]: W0123 18:32:47.883682 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.883709 kubelet[2940]: E0123 18:32:47.883688 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.883930 kubelet[2940]: E0123 18:32:47.883884 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.883930 kubelet[2940]: W0123 18:32:47.883893 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.883930 kubelet[2940]: E0123 18:32:47.883899 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.884097 kubelet[2940]: E0123 18:32:47.884060 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.884097 kubelet[2940]: W0123 18:32:47.884068 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.884097 kubelet[2940]: E0123 18:32:47.884074 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.884252 kubelet[2940]: E0123 18:32:47.884214 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.884252 kubelet[2940]: W0123 18:32:47.884219 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.884252 kubelet[2940]: E0123 18:32:47.884225 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:47.884579 kubelet[2940]: E0123 18:32:47.884538 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.884579 kubelet[2940]: W0123 18:32:47.884549 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.884579 kubelet[2940]: E0123 18:32:47.884556 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.884722 kubelet[2940]: E0123 18:32:47.884687 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.884722 kubelet[2940]: W0123 18:32:47.884692 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.884722 kubelet[2940]: E0123 18:32:47.884698 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.884924 kubelet[2940]: E0123 18:32:47.884843 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.884924 kubelet[2940]: W0123 18:32:47.884849 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.884924 kubelet[2940]: E0123 18:32:47.884855 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.885218 kubelet[2940]: E0123 18:32:47.885192 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.885218 kubelet[2940]: W0123 18:32:47.885215 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.885265 kubelet[2940]: E0123 18:32:47.885235 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.885488 kubelet[2940]: E0123 18:32:47.885474 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.885592 kubelet[2940]: W0123 18:32:47.885500 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.885592 kubelet[2940]: E0123 18:32:47.885509 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:47.885700 kubelet[2940]: E0123 18:32:47.885689 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.885700 kubelet[2940]: W0123 18:32:47.885698 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.885868 kubelet[2940]: E0123 18:32:47.885705 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.886277 kubelet[2940]: E0123 18:32:47.886221 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.886959 kubelet[2940]: W0123 18:32:47.886939 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.886959 kubelet[2940]: E0123 18:32:47.886956 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.887224 kubelet[2940]: E0123 18:32:47.887211 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.887224 kubelet[2940]: W0123 18:32:47.887221 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.887271 kubelet[2940]: E0123 18:32:47.887228 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.887424 kubelet[2940]: E0123 18:32:47.887412 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.887458 kubelet[2940]: W0123 18:32:47.887434 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.887458 kubelet[2940]: E0123 18:32:47.887441 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:47.887653 kubelet[2940]: E0123 18:32:47.887638 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.887653 kubelet[2940]: W0123 18:32:47.887650 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.887706 kubelet[2940]: E0123 18:32:47.887656 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:32:47.888160 kubelet[2940]: E0123 18:32:47.888145 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:32:47.888160 kubelet[2940]: W0123 18:32:47.888157 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:32:47.888205 kubelet[2940]: E0123 18:32:47.888167 2940 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:32:48.690810 containerd[1695]: time="2026-01-23T18:32:48.690714411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:48.692849 containerd[1695]: time="2026-01-23T18:32:48.692806656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 18:32:48.694441 containerd[1695]: time="2026-01-23T18:32:48.694409089Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:48.697373 containerd[1695]: time="2026-01-23T18:32:48.697169826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:48.697682 containerd[1695]: time="2026-01-23T18:32:48.697662500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.357923151s" Jan 23 18:32:48.697714 containerd[1695]: time="2026-01-23T18:32:48.697688523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:32:48.702274 containerd[1695]: time="2026-01-23T18:32:48.702240670Z" level=info msg="CreateContainer within sandbox \"fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:32:48.708609 kubelet[2940]: E0123 18:32:48.708572 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:32:48.716008 containerd[1695]: time="2026-01-23T18:32:48.715954419Z" level=info msg="Container 29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:48.720505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368656251.mount: Deactivated successfully. 
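The repeated driver-call failures above appear to come from kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol-driver container (created in the entries just above from the pod2daemon-flexvol image) has put the binary in place: the call produces no output, so kubelet's JSON parse fails with "unexpected end of JSON input". For illustration, a hypothetical stand-in showing the kind of JSON a FlexVolume "init" call is expected to print under the generic FlexVolume contract; the real uds driver is a compiled binary, not this script:

#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub: kubelet runs "<driver> init" and
# parses stdout as JSON, which is exactly the step that fails when the
# output is empty.
import json
import sys

def main() -> None:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Advertise no attach support; mount-style drivers typically do this.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))

if __name__ == "__main__":
    main()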
Jan 23 18:32:48.729499 containerd[1695]: time="2026-01-23T18:32:48.729472703Z" level=info msg="CreateContainer within sandbox \"fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23\"" Jan 23 18:32:48.730345 containerd[1695]: time="2026-01-23T18:32:48.730246624Z" level=info msg="StartContainer for \"29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23\"" Jan 23 18:32:48.732727 containerd[1695]: time="2026-01-23T18:32:48.732149783Z" level=info msg="connecting to shim 29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23" address="unix:///run/containerd/s/dfe0d25e7dc68f7332626192e755d850a6858145d14c99310fca5c5a77742c44" protocol=ttrpc version=3 Jan 23 18:32:48.754994 systemd[1]: Started cri-containerd-29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23.scope - libcontainer container 29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23. Jan 23 18:32:48.801000 audit: BPF prog-id=166 op=LOAD Jan 23 18:32:48.801000 audit[3618]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3469 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:48.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239656234313062383731333535366532633834336539613435353830 Jan 23 18:32:48.802000 audit: BPF prog-id=167 op=LOAD Jan 23 18:32:48.802000 audit[3618]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3469 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:48.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239656234313062383731333535366532633834336539613435353830 Jan 23 18:32:48.802000 audit: BPF prog-id=167 op=UNLOAD Jan 23 18:32:48.802000 audit[3618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:48.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239656234313062383731333535366532633834336539613435353830 Jan 23 18:32:48.802000 audit: BPF prog-id=166 op=UNLOAD Jan 23 18:32:48.802000 audit[3618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:48.802000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239656234313062383731333535366532633834336539613435353830 Jan 23 18:32:48.802000 audit: BPF prog-id=168 op=LOAD Jan 23 18:32:48.802000 audit[3618]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3469 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:48.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239656234313062383731333535366532633834336539613435353830 Jan 23 18:32:48.830200 kubelet[2940]: I0123 18:32:48.829791 2940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:32:48.831837 containerd[1695]: time="2026-01-23T18:32:48.831532624Z" level=info msg="StartContainer for \"29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23\" returns successfully" Jan 23 18:32:48.838967 systemd[1]: cri-containerd-29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23.scope: Deactivated successfully. Jan 23 18:32:48.840000 audit: BPF prog-id=168 op=UNLOAD Jan 23 18:32:48.843828 containerd[1695]: time="2026-01-23T18:32:48.843788006Z" level=info msg="received container exit event container_id:\"29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23\" id:\"29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23\" pid:3632 exited_at:{seconds:1769193168 nanos:843303677}" Jan 23 18:32:48.864141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29eb410b8713556e2c843e9a455806d37ec7300fdc8192829b362c5b6681dd23-rootfs.mount: Deactivated successfully. 
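A small cross-check on the container exit event above: exited_at is a seconds/nanoseconds timestamp, and converting it lands on the same 18:32:48.843 instant as the surrounding journal entries.

# Convert the exited_at value from the exit event for container 29eb410b...
from datetime import datetime, timezone

seconds, nanos = 1769193168, 843303677
ts = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
print(ts.isoformat())  # 2026-01-23T18:32:48.843303+00:00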
Jan 23 18:32:49.843357 containerd[1695]: time="2026-01-23T18:32:49.843071246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:32:50.709882 kubelet[2940]: E0123 18:32:50.708923 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:32:52.407393 containerd[1695]: time="2026-01-23T18:32:52.407328013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:52.410569 containerd[1695]: time="2026-01-23T18:32:52.410540269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 23 18:32:52.412060 containerd[1695]: time="2026-01-23T18:32:52.412030326Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:52.415004 containerd[1695]: time="2026-01-23T18:32:52.414955181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:52.415772 containerd[1695]: time="2026-01-23T18:32:52.415400471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.570975206s" Jan 23 18:32:52.415772 containerd[1695]: time="2026-01-23T18:32:52.415424592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:32:52.422493 containerd[1695]: time="2026-01-23T18:32:52.422464566Z" level=info msg="CreateContainer within sandbox \"fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:32:52.440276 containerd[1695]: time="2026-01-23T18:32:52.437853070Z" level=info msg="Container 9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:52.463281 containerd[1695]: time="2026-01-23T18:32:52.463250075Z" level=info msg="CreateContainer within sandbox \"fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0\"" Jan 23 18:32:52.464877 containerd[1695]: time="2026-01-23T18:32:52.464843564Z" level=info msg="StartContainer for \"9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0\"" Jan 23 18:32:52.466937 containerd[1695]: time="2026-01-23T18:32:52.466913867Z" level=info msg="connecting to shim 9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0" address="unix:///run/containerd/s/dfe0d25e7dc68f7332626192e755d850a6858145d14c99310fca5c5a77742c44" protocol=ttrpc version=3 Jan 23 18:32:52.498075 systemd[1]: Started 
cri-containerd-9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0.scope - libcontainer container 9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0. Jan 23 18:32:52.535000 audit: BPF prog-id=169 op=LOAD Jan 23 18:32:52.538164 kernel: kauditd_printk_skb: 78 callbacks suppressed Jan 23 18:32:52.538218 kernel: audit: type=1334 audit(1769193172.535:551): prog-id=169 op=LOAD Jan 23 18:32:52.535000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.541465 kernel: audit: type=1300 audit(1769193172.535:551): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.535000 audit: BPF prog-id=170 op=LOAD Jan 23 18:32:52.548980 kernel: audit: type=1327 audit(1769193172.535:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.549030 kernel: audit: type=1334 audit(1769193172.535:552): prog-id=170 op=LOAD Jan 23 18:32:52.535000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.555828 kernel: audit: type=1300 audit(1769193172.535:552): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.561991 kernel: audit: type=1327 audit(1769193172.535:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.535000 audit: BPF prog-id=170 op=UNLOAD Jan 23 18:32:52.535000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.565703 kernel: audit: type=1334 audit(1769193172.535:553): prog-id=170 op=UNLOAD Jan 23 18:32:52.565868 kernel: audit: type=1300 audit(1769193172.535:553): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.535000 audit: BPF prog-id=169 op=UNLOAD Jan 23 18:32:52.573102 kernel: audit: type=1327 audit(1769193172.535:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.573149 kernel: audit: type=1334 audit(1769193172.535:554): prog-id=169 op=UNLOAD Jan 23 18:32:52.535000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.535000 audit: BPF prog-id=171 op=LOAD Jan 23 18:32:52.535000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3469 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:52.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353861356363363363313164326461666237623836626530346265 Jan 23 18:32:52.578380 containerd[1695]: time="2026-01-23T18:32:52.578297266Z" level=info msg="StartContainer for \"9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0\" returns successfully" Jan 23 18:32:52.709338 kubelet[2940]: E0123 18:32:52.708086 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:32:53.059131 containerd[1695]: time="2026-01-23T18:32:53.058967706Z" level=info msg="received container exit event container_id:\"9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0\" id:\"9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0\" pid:3693 exited_at:{seconds:1769193173 nanos:58627372}" Jan 23 18:32:53.059037 systemd[1]: 
cri-containerd-9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0.scope: Deactivated successfully. Jan 23 18:32:53.059534 systemd[1]: cri-containerd-9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0.scope: Consumed 437ms CPU time, 193.8M memory peak, 171.3M written to disk. Jan 23 18:32:53.060000 audit: BPF prog-id=171 op=UNLOAD Jan 23 18:32:53.080714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f58a5cc63c11d2dafb7b86be04be504d2748e8cbe18f519586f2c95dc7eadf0-rootfs.mount: Deactivated successfully. Jan 23 18:32:53.130887 kubelet[2940]: I0123 18:32:53.130865 2940 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 18:32:53.178891 systemd[1]: Created slice kubepods-burstable-podbffd8927_96ef_4cdf_bf10_cb3549db7c56.slice - libcontainer container kubepods-burstable-podbffd8927_96ef_4cdf_bf10_cb3549db7c56.slice. Jan 23 18:32:53.188846 systemd[1]: Created slice kubepods-besteffort-podfbf77e94_7d63_4cf9_9744_b692622d727e.slice - libcontainer container kubepods-besteffort-podfbf77e94_7d63_4cf9_9744_b692622d727e.slice. Jan 23 18:32:53.196489 systemd[1]: Created slice kubepods-burstable-pod67c7d306_b5c4_44ef_b037_2a94e6f9e21a.slice - libcontainer container kubepods-burstable-pod67c7d306_b5c4_44ef_b037_2a94e6f9e21a.slice. Jan 23 18:32:53.207081 systemd[1]: Created slice kubepods-besteffort-pod2ae423e7_492d_4f77_ad52_275afa909708.slice - libcontainer container kubepods-besteffort-pod2ae423e7_492d_4f77_ad52_275afa909708.slice. Jan 23 18:32:53.214268 systemd[1]: Created slice kubepods-besteffort-podd1b9504d_be7a_4b41_b198_d33537aa128d.slice - libcontainer container kubepods-besteffort-podd1b9504d_be7a_4b41_b198_d33537aa128d.slice. Jan 23 18:32:53.222768 systemd[1]: Created slice kubepods-besteffort-podd7423672_957c_488d_baee_8a9e9c290e13.slice - libcontainer container kubepods-besteffort-podd7423672_957c_488d_baee_8a9e9c290e13.slice. 
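The "Created slice" entries above show the pod slice naming visible throughout this log: the pod's QoS class, then "pod" plus the pod UID with dashes turned into underscores. A small sketch inferred from these entries rather than from kubelet source (the helper name pod_slice is ours, not kubelet's):

# Reproduce the slice names seen above from a QoS class and a pod UID.
def pod_slice(qos_class: str, pod_uid: str) -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice("burstable", "bffd8927-96ef-4cdf-bf10-cb3549db7c56"))
# kubepods-burstable-podbffd8927_96ef_4cdf_bf10_cb3549db7c56.slice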
Jan 23 18:32:53.226790 kubelet[2940]: I0123 18:32:53.226762 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d7423672-957c-488d-baee-8a9e9c290e13-calico-apiserver-certs\") pod \"calico-apiserver-67759dc977-nhm59\" (UID: \"d7423672-957c-488d-baee-8a9e9c290e13\") " pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" Jan 23 18:32:53.226790 kubelet[2940]: I0123 18:32:53.226791 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-backend-key-pair\") pod \"whisker-6c969d66b-kr5ff\" (UID: \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\") " pod="calico-system/whisker-6c969d66b-kr5ff" Jan 23 18:32:53.226914 kubelet[2940]: I0123 18:32:53.226808 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jft49\" (UniqueName: \"kubernetes.io/projected/2ae423e7-492d-4f77-ad52-275afa909708-kube-api-access-jft49\") pod \"calico-apiserver-67759dc977-rb8xc\" (UID: \"2ae423e7-492d-4f77-ad52-275afa909708\") " pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" Jan 23 18:32:53.226914 kubelet[2940]: I0123 18:32:53.226833 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1b9504d-be7a-4b41-b198-d33537aa128d-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xmb4s\" (UID: \"d1b9504d-be7a-4b41-b198-d33537aa128d\") " pod="calico-system/goldmane-7c778bb748-xmb4s" Jan 23 18:32:53.226914 kubelet[2940]: I0123 18:32:53.226849 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfxc\" (UniqueName: \"kubernetes.io/projected/bffd8927-96ef-4cdf-bf10-cb3549db7c56-kube-api-access-kbfxc\") pod \"coredns-66bc5c9577-5xlg5\" (UID: \"bffd8927-96ef-4cdf-bf10-cb3549db7c56\") " pod="kube-system/coredns-66bc5c9577-5xlg5" Jan 23 18:32:53.226914 kubelet[2940]: I0123 18:32:53.226863 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c7d306-b5c4-44ef-b037-2a94e6f9e21a-config-volume\") pod \"coredns-66bc5c9577-ff9vl\" (UID: \"67c7d306-b5c4-44ef-b037-2a94e6f9e21a\") " pod="kube-system/coredns-66bc5c9577-ff9vl" Jan 23 18:32:53.226914 kubelet[2940]: I0123 18:32:53.226879 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ae423e7-492d-4f77-ad52-275afa909708-calico-apiserver-certs\") pod \"calico-apiserver-67759dc977-rb8xc\" (UID: \"2ae423e7-492d-4f77-ad52-275afa909708\") " pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" Jan 23 18:32:53.227038 kubelet[2940]: I0123 18:32:53.226894 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b9504d-be7a-4b41-b198-d33537aa128d-config\") pod \"goldmane-7c778bb748-xmb4s\" (UID: \"d1b9504d-be7a-4b41-b198-d33537aa128d\") " pod="calico-system/goldmane-7c778bb748-xmb4s" Jan 23 18:32:53.227038 kubelet[2940]: I0123 18:32:53.226908 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/d1b9504d-be7a-4b41-b198-d33537aa128d-goldmane-key-pair\") pod \"goldmane-7c778bb748-xmb4s\" (UID: \"d1b9504d-be7a-4b41-b198-d33537aa128d\") " pod="calico-system/goldmane-7c778bb748-xmb4s" Jan 23 18:32:53.227038 kubelet[2940]: I0123 18:32:53.226924 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbf77e94-7d63-4cf9-9744-b692622d727e-tigera-ca-bundle\") pod \"calico-kube-controllers-65674f688d-kxr2f\" (UID: \"fbf77e94-7d63-4cf9-9744-b692622d727e\") " pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" Jan 23 18:32:53.227038 kubelet[2940]: I0123 18:32:53.226937 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld5fd\" (UniqueName: \"kubernetes.io/projected/fbf77e94-7d63-4cf9-9744-b692622d727e-kube-api-access-ld5fd\") pod \"calico-kube-controllers-65674f688d-kxr2f\" (UID: \"fbf77e94-7d63-4cf9-9744-b692622d727e\") " pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" Jan 23 18:32:53.227038 kubelet[2940]: I0123 18:32:53.226953 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bffd8927-96ef-4cdf-bf10-cb3549db7c56-config-volume\") pod \"coredns-66bc5c9577-5xlg5\" (UID: \"bffd8927-96ef-4cdf-bf10-cb3549db7c56\") " pod="kube-system/coredns-66bc5c9577-5xlg5" Jan 23 18:32:53.227148 kubelet[2940]: I0123 18:32:53.226969 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-ca-bundle\") pod \"whisker-6c969d66b-kr5ff\" (UID: \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\") " pod="calico-system/whisker-6c969d66b-kr5ff" Jan 23 18:32:53.227148 kubelet[2940]: I0123 18:32:53.226982 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9knf\" (UniqueName: \"kubernetes.io/projected/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-kube-api-access-t9knf\") pod \"whisker-6c969d66b-kr5ff\" (UID: \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\") " pod="calico-system/whisker-6c969d66b-kr5ff" Jan 23 18:32:53.227148 kubelet[2940]: I0123 18:32:53.227009 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96dh\" (UniqueName: \"kubernetes.io/projected/d7423672-957c-488d-baee-8a9e9c290e13-kube-api-access-b96dh\") pod \"calico-apiserver-67759dc977-nhm59\" (UID: \"d7423672-957c-488d-baee-8a9e9c290e13\") " pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" Jan 23 18:32:53.227148 kubelet[2940]: I0123 18:32:53.227024 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtk7\" (UniqueName: \"kubernetes.io/projected/d1b9504d-be7a-4b41-b198-d33537aa128d-kube-api-access-2mtk7\") pod \"goldmane-7c778bb748-xmb4s\" (UID: \"d1b9504d-be7a-4b41-b198-d33537aa128d\") " pod="calico-system/goldmane-7c778bb748-xmb4s" Jan 23 18:32:53.227148 kubelet[2940]: I0123 18:32:53.227038 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ncf\" (UniqueName: \"kubernetes.io/projected/67c7d306-b5c4-44ef-b037-2a94e6f9e21a-kube-api-access-q2ncf\") pod \"coredns-66bc5c9577-ff9vl\" (UID: \"67c7d306-b5c4-44ef-b037-2a94e6f9e21a\") " 
pod="kube-system/coredns-66bc5c9577-ff9vl" Jan 23 18:32:53.227496 systemd[1]: Created slice kubepods-besteffort-podcc0640ac_fea8_465c_8ec0_aaa03f5b2c96.slice - libcontainer container kubepods-besteffort-podcc0640ac_fea8_465c_8ec0_aaa03f5b2c96.slice. Jan 23 18:32:53.489220 containerd[1695]: time="2026-01-23T18:32:53.489183719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5xlg5,Uid:bffd8927-96ef-4cdf-bf10-cb3549db7c56,Namespace:kube-system,Attempt:0,}" Jan 23 18:32:53.499667 containerd[1695]: time="2026-01-23T18:32:53.499639846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65674f688d-kxr2f,Uid:fbf77e94-7d63-4cf9-9744-b692622d727e,Namespace:calico-system,Attempt:0,}" Jan 23 18:32:53.508985 containerd[1695]: time="2026-01-23T18:32:53.508840829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ff9vl,Uid:67c7d306-b5c4-44ef-b037-2a94e6f9e21a,Namespace:kube-system,Attempt:0,}" Jan 23 18:32:53.514612 containerd[1695]: time="2026-01-23T18:32:53.514582858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-rb8xc,Uid:2ae423e7-492d-4f77-ad52-275afa909708,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:32:53.523709 containerd[1695]: time="2026-01-23T18:32:53.523612729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xmb4s,Uid:d1b9504d-be7a-4b41-b198-d33537aa128d,Namespace:calico-system,Attempt:0,}" Jan 23 18:32:53.530122 containerd[1695]: time="2026-01-23T18:32:53.530079550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-nhm59,Uid:d7423672-957c-488d-baee-8a9e9c290e13,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:32:53.531795 containerd[1695]: time="2026-01-23T18:32:53.531771251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c969d66b-kr5ff,Uid:cc0640ac-fea8-465c-8ec0-aaa03f5b2c96,Namespace:calico-system,Attempt:0,}" Jan 23 18:32:53.628621 containerd[1695]: time="2026-01-23T18:32:53.628575659Z" level=error msg="Failed to destroy network for sandbox \"a8b89588c47cbc5b39384c45d3e533ab6ebb7ad487fd5813c15df6dec9f74a20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.635597 containerd[1695]: time="2026-01-23T18:32:53.635480639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5xlg5,Uid:bffd8927-96ef-4cdf-bf10-cb3549db7c56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b89588c47cbc5b39384c45d3e533ab6ebb7ad487fd5813c15df6dec9f74a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.635763 kubelet[2940]: E0123 18:32:53.635682 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b89588c47cbc5b39384c45d3e533ab6ebb7ad487fd5813c15df6dec9f74a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.635763 kubelet[2940]: E0123 18:32:53.635748 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"a8b89588c47cbc5b39384c45d3e533ab6ebb7ad487fd5813c15df6dec9f74a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5xlg5" Jan 23 18:32:53.635956 kubelet[2940]: E0123 18:32:53.635765 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b89588c47cbc5b39384c45d3e533ab6ebb7ad487fd5813c15df6dec9f74a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5xlg5" Jan 23 18:32:53.636075 kubelet[2940]: E0123 18:32:53.636010 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5xlg5_kube-system(bffd8927-96ef-4cdf-bf10-cb3549db7c56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5xlg5_kube-system(bffd8927-96ef-4cdf-bf10-cb3549db7c56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8b89588c47cbc5b39384c45d3e533ab6ebb7ad487fd5813c15df6dec9f74a20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5xlg5" podUID="bffd8927-96ef-4cdf-bf10-cb3549db7c56" Jan 23 18:32:53.642153 containerd[1695]: time="2026-01-23T18:32:53.642051511Z" level=error msg="Failed to destroy network for sandbox \"4366dbb56145576116026567d60214cefa7aa8120e27564b9bcdfe895547deb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.649001 containerd[1695]: time="2026-01-23T18:32:53.648959241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-rb8xc,Uid:2ae423e7-492d-4f77-ad52-275afa909708,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4366dbb56145576116026567d60214cefa7aa8120e27564b9bcdfe895547deb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.649304 kubelet[2940]: E0123 18:32:53.649271 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4366dbb56145576116026567d60214cefa7aa8120e27564b9bcdfe895547deb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.649355 kubelet[2940]: E0123 18:32:53.649332 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4366dbb56145576116026567d60214cefa7aa8120e27564b9bcdfe895547deb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" Jan 23 
18:32:53.649389 kubelet[2940]: E0123 18:32:53.649351 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4366dbb56145576116026567d60214cefa7aa8120e27564b9bcdfe895547deb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" Jan 23 18:32:53.649437 kubelet[2940]: E0123 18:32:53.649417 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67759dc977-rb8xc_calico-apiserver(2ae423e7-492d-4f77-ad52-275afa909708)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67759dc977-rb8xc_calico-apiserver(2ae423e7-492d-4f77-ad52-275afa909708)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4366dbb56145576116026567d60214cefa7aa8120e27564b9bcdfe895547deb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:32:53.663661 containerd[1695]: time="2026-01-23T18:32:53.663506158Z" level=error msg="Failed to destroy network for sandbox \"87c6e9f81824cbf8ac6a1918c88f718f29f2c557491e4457b85cb76fc9648306\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.667942 containerd[1695]: time="2026-01-23T18:32:53.667887433Z" level=error msg="Failed to destroy network for sandbox \"572b6c25bb7b14948c775edda8217f7c01d40ef6945179240c15d10af8434235\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.668227 containerd[1695]: time="2026-01-23T18:32:53.668167675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ff9vl,Uid:67c7d306-b5c4-44ef-b037-2a94e6f9e21a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c6e9f81824cbf8ac6a1918c88f718f29f2c557491e4457b85cb76fc9648306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.668378 kubelet[2940]: E0123 18:32:53.668348 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c6e9f81824cbf8ac6a1918c88f718f29f2c557491e4457b85cb76fc9648306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.668457 kubelet[2940]: E0123 18:32:53.668398 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c6e9f81824cbf8ac6a1918c88f718f29f2c557491e4457b85cb76fc9648306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ff9vl" Jan 23 18:32:53.668457 kubelet[2940]: E0123 18:32:53.668417 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c6e9f81824cbf8ac6a1918c88f718f29f2c557491e4457b85cb76fc9648306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ff9vl" Jan 23 18:32:53.668591 kubelet[2940]: E0123 18:32:53.668462 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ff9vl_kube-system(67c7d306-b5c4-44ef-b037-2a94e6f9e21a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ff9vl_kube-system(67c7d306-b5c4-44ef-b037-2a94e6f9e21a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87c6e9f81824cbf8ac6a1918c88f718f29f2c557491e4457b85cb76fc9648306\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ff9vl" podUID="67c7d306-b5c4-44ef-b037-2a94e6f9e21a" Jan 23 18:32:53.673611 containerd[1695]: time="2026-01-23T18:32:53.673474847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65674f688d-kxr2f,Uid:fbf77e94-7d63-4cf9-9744-b692622d727e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"572b6c25bb7b14948c775edda8217f7c01d40ef6945179240c15d10af8434235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.673709 kubelet[2940]: E0123 18:32:53.673636 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"572b6c25bb7b14948c775edda8217f7c01d40ef6945179240c15d10af8434235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.673709 kubelet[2940]: E0123 18:32:53.673673 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"572b6c25bb7b14948c775edda8217f7c01d40ef6945179240c15d10af8434235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" Jan 23 18:32:53.673709 kubelet[2940]: E0123 18:32:53.673689 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"572b6c25bb7b14948c775edda8217f7c01d40ef6945179240c15d10af8434235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" Jan 23 18:32:53.673810 kubelet[2940]: E0123 18:32:53.673742 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-65674f688d-kxr2f_calico-system(fbf77e94-7d63-4cf9-9744-b692622d727e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65674f688d-kxr2f_calico-system(fbf77e94-7d63-4cf9-9744-b692622d727e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"572b6c25bb7b14948c775edda8217f7c01d40ef6945179240c15d10af8434235\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:32:53.682523 containerd[1695]: time="2026-01-23T18:32:53.682483242Z" level=error msg="Failed to destroy network for sandbox \"af43a2be852bfd017e4c9b0e245ff9646aaaf734be15e25203ed693e4c20df5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.687794 containerd[1695]: time="2026-01-23T18:32:53.687763524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xmb4s,Uid:d1b9504d-be7a-4b41-b198-d33537aa128d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af43a2be852bfd017e4c9b0e245ff9646aaaf734be15e25203ed693e4c20df5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.688088 kubelet[2940]: E0123 18:32:53.688006 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af43a2be852bfd017e4c9b0e245ff9646aaaf734be15e25203ed693e4c20df5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.688088 kubelet[2940]: E0123 18:32:53.688063 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af43a2be852bfd017e4c9b0e245ff9646aaaf734be15e25203ed693e4c20df5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xmb4s" Jan 23 18:32:53.688177 kubelet[2940]: E0123 18:32:53.688089 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af43a2be852bfd017e4c9b0e245ff9646aaaf734be15e25203ed693e4c20df5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xmb4s" Jan 23 18:32:53.688177 kubelet[2940]: E0123 18:32:53.688148 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xmb4s_calico-system(d1b9504d-be7a-4b41-b198-d33537aa128d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xmb4s_calico-system(d1b9504d-be7a-4b41-b198-d33537aa128d)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"af43a2be852bfd017e4c9b0e245ff9646aaaf734be15e25203ed693e4c20df5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:32:53.698990 containerd[1695]: time="2026-01-23T18:32:53.698922552Z" level=error msg="Failed to destroy network for sandbox \"917238648ae04b4044fcdb240f8b95f33570acb6778f7bfce2c585e655b84d83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.702730 containerd[1695]: time="2026-01-23T18:32:53.702617121Z" level=error msg="Failed to destroy network for sandbox \"401148179006d9ea39a68e64faef355f46abe9a844d30a02189786825252c745\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.703601 containerd[1695]: time="2026-01-23T18:32:53.703577019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-nhm59,Uid:d7423672-957c-488d-baee-8a9e9c290e13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"917238648ae04b4044fcdb240f8b95f33570acb6778f7bfce2c585e655b84d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.703893 kubelet[2940]: E0123 18:32:53.703865 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917238648ae04b4044fcdb240f8b95f33570acb6778f7bfce2c585e655b84d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.703967 kubelet[2940]: E0123 18:32:53.703912 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917238648ae04b4044fcdb240f8b95f33570acb6778f7bfce2c585e655b84d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" Jan 23 18:32:53.703967 kubelet[2940]: E0123 18:32:53.703930 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917238648ae04b4044fcdb240f8b95f33570acb6778f7bfce2c585e655b84d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" Jan 23 18:32:53.704025 kubelet[2940]: E0123 18:32:53.703984 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67759dc977-nhm59_calico-apiserver(d7423672-957c-488d-baee-8a9e9c290e13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-67759dc977-nhm59_calico-apiserver(d7423672-957c-488d-baee-8a9e9c290e13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"917238648ae04b4044fcdb240f8b95f33570acb6778f7bfce2c585e655b84d83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:32:53.706913 containerd[1695]: time="2026-01-23T18:32:53.706877835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c969d66b-kr5ff,Uid:cc0640ac-fea8-465c-8ec0-aaa03f5b2c96,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"401148179006d9ea39a68e64faef355f46abe9a844d30a02189786825252c745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.707097 kubelet[2940]: E0123 18:32:53.707062 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"401148179006d9ea39a68e64faef355f46abe9a844d30a02189786825252c745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:53.707190 kubelet[2940]: E0123 18:32:53.707177 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"401148179006d9ea39a68e64faef355f46abe9a844d30a02189786825252c745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c969d66b-kr5ff" Jan 23 18:32:53.707304 kubelet[2940]: E0123 18:32:53.707236 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"401148179006d9ea39a68e64faef355f46abe9a844d30a02189786825252c745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c969d66b-kr5ff" Jan 23 18:32:53.707304 kubelet[2940]: E0123 18:32:53.707281 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c969d66b-kr5ff_calico-system(cc0640ac-fea8-465c-8ec0-aaa03f5b2c96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c969d66b-kr5ff_calico-system(cc0640ac-fea8-465c-8ec0-aaa03f5b2c96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"401148179006d9ea39a68e64faef355f46abe9a844d30a02189786825252c745\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c969d66b-kr5ff" podUID="cc0640ac-fea8-465c-8ec0-aaa03f5b2c96" Jan 23 18:32:53.874003 containerd[1695]: time="2026-01-23T18:32:53.873225189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:32:54.441625 systemd[1]: 
run-netns-cni\x2d3ce13574\x2d4158\x2d8a8b\x2d4c30\x2dbbf11eb9e697.mount: Deactivated successfully. Jan 23 18:32:54.441752 systemd[1]: run-netns-cni\x2dab4605c5\x2d1512\x2deb1e\x2d62bb\x2d9218d61a664c.mount: Deactivated successfully. Jan 23 18:32:54.441837 systemd[1]: run-netns-cni\x2d26e22c6f\x2d345e\x2d7306\x2d64b5\x2d6af911ca70bd.mount: Deactivated successfully. Jan 23 18:32:54.441902 systemd[1]: run-netns-cni\x2d3ddbaabc\x2d671b\x2d1633\x2d32c5\x2d1b4df72c91bf.mount: Deactivated successfully. Jan 23 18:32:54.718399 systemd[1]: Created slice kubepods-besteffort-pod1bbaf1bf_0602_4b75_8639_4ec842393e67.slice - libcontainer container kubepods-besteffort-pod1bbaf1bf_0602_4b75_8639_4ec842393e67.slice. Jan 23 18:32:54.860121 containerd[1695]: time="2026-01-23T18:32:54.860030843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4bbc,Uid:1bbaf1bf-0602-4b75-8639-4ec842393e67,Namespace:calico-system,Attempt:0,}" Jan 23 18:32:54.952906 containerd[1695]: time="2026-01-23T18:32:54.952851162Z" level=error msg="Failed to destroy network for sandbox \"27511a3264bc45069a1fa4978e762cafd4fd8e46b56a95c527caaa5ac0560cd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:54.955450 systemd[1]: run-netns-cni\x2de49f517c\x2d37ff\x2d18fa\x2dacbc\x2dd0e18a37e9ec.mount: Deactivated successfully. Jan 23 18:32:54.962870 containerd[1695]: time="2026-01-23T18:32:54.962790841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4bbc,Uid:1bbaf1bf-0602-4b75-8639-4ec842393e67,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27511a3264bc45069a1fa4978e762cafd4fd8e46b56a95c527caaa5ac0560cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:54.963095 kubelet[2940]: E0123 18:32:54.963053 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27511a3264bc45069a1fa4978e762cafd4fd8e46b56a95c527caaa5ac0560cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:32:54.963536 kubelet[2940]: E0123 18:32:54.963100 2940 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27511a3264bc45069a1fa4978e762cafd4fd8e46b56a95c527caaa5ac0560cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p4bbc" Jan 23 18:32:54.963536 kubelet[2940]: E0123 18:32:54.963118 2940 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27511a3264bc45069a1fa4978e762cafd4fd8e46b56a95c527caaa5ac0560cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p4bbc" Jan 23 18:32:54.963536 kubelet[2940]: E0123 18:32:54.963167 2940 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27511a3264bc45069a1fa4978e762cafd4fd8e46b56a95c527caaa5ac0560cd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:32:58.480452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50477800.mount: Deactivated successfully. Jan 23 18:32:58.504988 containerd[1695]: time="2026-01-23T18:32:58.504935762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:58.507132 containerd[1695]: time="2026-01-23T18:32:58.507105873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880766" Jan 23 18:32:58.509220 containerd[1695]: time="2026-01-23T18:32:58.509186256Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:58.512033 containerd[1695]: time="2026-01-23T18:32:58.511990284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:32:58.512766 containerd[1695]: time="2026-01-23T18:32:58.512545999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.63927372s" Jan 23 18:32:58.512766 containerd[1695]: time="2026-01-23T18:32:58.512583704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:32:58.534164 containerd[1695]: time="2026-01-23T18:32:58.534118929Z" level=info msg="CreateContainer within sandbox \"fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:32:58.569619 containerd[1695]: time="2026-01-23T18:32:58.569326569Z" level=info msg="Container 9ffea743ddc5d6ba3ab26e1fdb59adb963bd48fc810f1d4cb16061ecdd2c9c7c: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:32:58.592730 containerd[1695]: time="2026-01-23T18:32:58.592690789Z" level=info msg="CreateContainer within sandbox \"fa6b3e5b9e93321d91cc3843698a2a43cf638a5a650b56c375974b8e937a9497\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ffea743ddc5d6ba3ab26e1fdb59adb963bd48fc810f1d4cb16061ecdd2c9c7c\"" Jan 23 18:32:58.594843 containerd[1695]: time="2026-01-23T18:32:58.594035793Z" level=info msg="StartContainer for \"9ffea743ddc5d6ba3ab26e1fdb59adb963bd48fc810f1d4cb16061ecdd2c9c7c\"" Jan 23 18:32:58.595406 containerd[1695]: time="2026-01-23T18:32:58.595387931Z" level=info 
msg="connecting to shim 9ffea743ddc5d6ba3ab26e1fdb59adb963bd48fc810f1d4cb16061ecdd2c9c7c" address="unix:///run/containerd/s/dfe0d25e7dc68f7332626192e755d850a6858145d14c99310fca5c5a77742c44" protocol=ttrpc version=3 Jan 23 18:32:58.647999 systemd[1]: Started cri-containerd-9ffea743ddc5d6ba3ab26e1fdb59adb963bd48fc810f1d4cb16061ecdd2c9c7c.scope - libcontainer container 9ffea743ddc5d6ba3ab26e1fdb59adb963bd48fc810f1d4cb16061ecdd2c9c7c. Jan 23 18:32:58.708469 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 23 18:32:58.708576 kernel: audit: type=1334 audit(1769193178.706:557): prog-id=172 op=LOAD Jan 23 18:32:58.706000 audit: BPF prog-id=172 op=LOAD Jan 23 18:32:58.706000 audit[3948]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.712078 kernel: audit: type=1300 audit(1769193178.706:557): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.716345 kernel: audit: type=1327 audit(1769193178.706:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.706000 audit: BPF prog-id=173 op=LOAD Jan 23 18:32:58.719330 kernel: audit: type=1334 audit(1769193178.706:558): prog-id=173 op=LOAD Jan 23 18:32:58.706000 audit[3948]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.722026 kernel: audit: type=1300 audit(1769193178.706:558): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.726169 kernel: audit: type=1327 audit(1769193178.706:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.706000 audit: BPF prog-id=173 op=UNLOAD Jan 23 18:32:58.729135 kernel: audit: 
type=1334 audit(1769193178.706:559): prog-id=173 op=UNLOAD Jan 23 18:32:58.706000 audit[3948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.731284 kernel: audit: type=1300 audit(1769193178.706:559): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.735962 kernel: audit: type=1327 audit(1769193178.706:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.738339 kernel: audit: type=1334 audit(1769193178.706:560): prog-id=172 op=UNLOAD Jan 23 18:32:58.706000 audit: BPF prog-id=172 op=UNLOAD Jan 23 18:32:58.706000 audit[3948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.706000 audit: BPF prog-id=174 op=LOAD Jan 23 18:32:58.706000 audit[3948]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3469 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:32:58.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966666561373433646463356436626133616232366531666462353961 Jan 23 18:32:58.749158 containerd[1695]: time="2026-01-23T18:32:58.749124744Z" level=info msg="StartContainer for \"9ffea743ddc5d6ba3ab26e1fdb59adb963bd48fc810f1d4cb16061ecdd2c9c7c\" returns successfully" Jan 23 18:32:58.843453 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:32:58.843576 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 18:32:58.896759 kubelet[2940]: I0123 18:32:58.896308 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vb4p4" podStartSLOduration=1.80902629 podStartE2EDuration="14.896293871s" podCreationTimestamp="2026-01-23 18:32:44 +0000 UTC" firstStartedPulling="2026-01-23 18:32:45.426156131 +0000 UTC m=+21.810927112" lastFinishedPulling="2026-01-23 18:32:58.513423712 +0000 UTC m=+34.898194693" observedRunningTime="2026-01-23 18:32:58.895527285 +0000 UTC m=+35.280298287" watchObservedRunningTime="2026-01-23 18:32:58.896293871 +0000 UTC m=+35.281064869" Jan 23 18:32:59.071720 kubelet[2940]: I0123 18:32:59.071131 2940 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-backend-key-pair\") pod \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\" (UID: \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\") " Jan 23 18:32:59.071720 kubelet[2940]: I0123 18:32:59.071171 2940 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9knf\" (UniqueName: \"kubernetes.io/projected/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-kube-api-access-t9knf\") pod \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\" (UID: \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\") " Jan 23 18:32:59.071720 kubelet[2940]: I0123 18:32:59.071201 2940 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-ca-bundle\") pod \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\" (UID: \"cc0640ac-fea8-465c-8ec0-aaa03f5b2c96\") " Jan 23 18:32:59.072732 kubelet[2940]: I0123 18:32:59.072697 2940 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cc0640ac-fea8-465c-8ec0-aaa03f5b2c96" (UID: "cc0640ac-fea8-465c-8ec0-aaa03f5b2c96"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:32:59.076503 kubelet[2940]: I0123 18:32:59.076424 2940 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-kube-api-access-t9knf" (OuterVolumeSpecName: "kube-api-access-t9knf") pod "cc0640ac-fea8-465c-8ec0-aaa03f5b2c96" (UID: "cc0640ac-fea8-465c-8ec0-aaa03f5b2c96"). InnerVolumeSpecName "kube-api-access-t9knf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:32:59.076747 kubelet[2940]: I0123 18:32:59.076626 2940 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cc0640ac-fea8-465c-8ec0-aaa03f5b2c96" (UID: "cc0640ac-fea8-465c-8ec0-aaa03f5b2c96"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:32:59.172018 kubelet[2940]: I0123 18:32:59.171948 2940 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-backend-key-pair\") on node \"ci-4547-1-0-1-5b0cac0ed6\" DevicePath \"\"" Jan 23 18:32:59.172186 kubelet[2940]: I0123 18:32:59.171999 2940 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t9knf\" (UniqueName: \"kubernetes.io/projected/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-kube-api-access-t9knf\") on node \"ci-4547-1-0-1-5b0cac0ed6\" DevicePath \"\"" Jan 23 18:32:59.172186 kubelet[2940]: I0123 18:32:59.172169 2940 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96-whisker-ca-bundle\") on node \"ci-4547-1-0-1-5b0cac0ed6\" DevicePath \"\"" Jan 23 18:32:59.482353 systemd[1]: var-lib-kubelet-pods-cc0640ac\x2dfea8\x2d465c\x2d8ec0\x2daaa03f5b2c96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt9knf.mount: Deactivated successfully. Jan 23 18:32:59.482479 systemd[1]: var-lib-kubelet-pods-cc0640ac\x2dfea8\x2d465c\x2d8ec0\x2daaa03f5b2c96-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:32:59.730543 systemd[1]: Removed slice kubepods-besteffort-podcc0640ac_fea8_465c_8ec0_aaa03f5b2c96.slice - libcontainer container kubepods-besteffort-podcc0640ac_fea8_465c_8ec0_aaa03f5b2c96.slice. Jan 23 18:32:59.884321 kubelet[2940]: I0123 18:32:59.884279 2940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:32:59.970676 systemd[1]: Created slice kubepods-besteffort-pod61d5f304_fb8b_48e8_ae8d_711deece6e7e.slice - libcontainer container kubepods-besteffort-pod61d5f304_fb8b_48e8_ae8d_711deece6e7e.slice. 
Jan 23 18:33:00.078215 kubelet[2940]: I0123 18:33:00.078140 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61d5f304-fb8b-48e8-ae8d-711deece6e7e-whisker-ca-bundle\") pod \"whisker-74f9f4c8f6-267pf\" (UID: \"61d5f304-fb8b-48e8-ae8d-711deece6e7e\") " pod="calico-system/whisker-74f9f4c8f6-267pf" Jan 23 18:33:00.078215 kubelet[2940]: I0123 18:33:00.078182 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61d5f304-fb8b-48e8-ae8d-711deece6e7e-whisker-backend-key-pair\") pod \"whisker-74f9f4c8f6-267pf\" (UID: \"61d5f304-fb8b-48e8-ae8d-711deece6e7e\") " pod="calico-system/whisker-74f9f4c8f6-267pf" Jan 23 18:33:00.078610 kubelet[2940]: I0123 18:33:00.078197 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2f4q\" (UniqueName: \"kubernetes.io/projected/61d5f304-fb8b-48e8-ae8d-711deece6e7e-kube-api-access-s2f4q\") pod \"whisker-74f9f4c8f6-267pf\" (UID: \"61d5f304-fb8b-48e8-ae8d-711deece6e7e\") " pod="calico-system/whisker-74f9f4c8f6-267pf" Jan 23 18:33:00.279827 containerd[1695]: time="2026-01-23T18:33:00.279737121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74f9f4c8f6-267pf,Uid:61d5f304-fb8b-48e8-ae8d-711deece6e7e,Namespace:calico-system,Attempt:0,}" Jan 23 18:33:00.498052 systemd-networkd[1576]: cali257f79d8b91: Link UP Jan 23 18:33:00.498508 systemd-networkd[1576]: cali257f79d8b91: Gained carrier Jan 23 18:33:00.513407 containerd[1695]: 2026-01-23 18:33:00.322 [INFO][4146] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:00.513407 containerd[1695]: 2026-01-23 18:33:00.432 [INFO][4146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0 whisker-74f9f4c8f6- calico-system 61d5f304-fb8b-48e8-ae8d-711deece6e7e 866 0 2026-01-23 18:32:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74f9f4c8f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 whisker-74f9f4c8f6-267pf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali257f79d8b91 [] [] }} ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-" Jan 23 18:33:00.513407 containerd[1695]: 2026-01-23 18:33:00.432 [INFO][4146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" Jan 23 18:33:00.513407 containerd[1695]: 2026-01-23 18:33:00.455 [INFO][4165] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" HandleID="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.456 [INFO][4165] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" HandleID="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"whisker-74f9f4c8f6-267pf", "timestamp":"2026-01-23 18:33:00.455919671 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.456 [INFO][4165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.456 [INFO][4165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.456 [INFO][4165] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.462 [INFO][4165] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.465 [INFO][4165] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.469 [INFO][4165] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.471 [INFO][4165] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.513606 containerd[1695]: 2026-01-23 18:33:00.473 [INFO][4165] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.514002 containerd[1695]: 2026-01-23 18:33:00.473 [INFO][4165] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.514002 containerd[1695]: 2026-01-23 18:33:00.474 [INFO][4165] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e Jan 23 18:33:00.514002 containerd[1695]: 2026-01-23 18:33:00.478 [INFO][4165] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.514002 containerd[1695]: 2026-01-23 18:33:00.482 [INFO][4165] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.65/26] block=192.168.123.64/26 handle="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.514002 containerd[1695]: 2026-01-23 18:33:00.482 [INFO][4165] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.65/26] handle="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:00.514002 
containerd[1695]: 2026-01-23 18:33:00.482 [INFO][4165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:33:00.514002 containerd[1695]: 2026-01-23 18:33:00.483 [INFO][4165] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.65/26] IPv6=[] ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" HandleID="k8s-pod-network.1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" Jan 23 18:33:00.514154 containerd[1695]: 2026-01-23 18:33:00.486 [INFO][4146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0", GenerateName:"whisker-74f9f4c8f6-", Namespace:"calico-system", SelfLink:"", UID:"61d5f304-fb8b-48e8-ae8d-711deece6e7e", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74f9f4c8f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"whisker-74f9f4c8f6-267pf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali257f79d8b91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:00.514154 containerd[1695]: 2026-01-23 18:33:00.486 [INFO][4146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.65/32] ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" Jan 23 18:33:00.514239 containerd[1695]: 2026-01-23 18:33:00.486 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali257f79d8b91 ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" Jan 23 18:33:00.514239 containerd[1695]: 2026-01-23 18:33:00.500 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" Jan 23 18:33:00.514280 containerd[1695]: 2026-01-23 18:33:00.500 [INFO][4146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0", GenerateName:"whisker-74f9f4c8f6-", Namespace:"calico-system", SelfLink:"", UID:"61d5f304-fb8b-48e8-ae8d-711deece6e7e", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74f9f4c8f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e", Pod:"whisker-74f9f4c8f6-267pf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali257f79d8b91", MAC:"be:07:be:0f:33:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:00.514347 containerd[1695]: 2026-01-23 18:33:00.511 [INFO][4146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" Namespace="calico-system" Pod="whisker-74f9f4c8f6-267pf" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-whisker--74f9f4c8f6--267pf-eth0" Jan 23 18:33:00.561301 containerd[1695]: time="2026-01-23T18:33:00.561200046Z" level=info msg="connecting to shim 1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e" address="unix:///run/containerd/s/6ed666d37f19cd5187ffaa45235b03bde53fad523043c970761d0f6b2ffac6ab" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:00.585988 systemd[1]: Started cri-containerd-1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e.scope - libcontainer container 1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e. 
Jan 23 18:33:00.594000 audit: BPF prog-id=175 op=LOAD Jan 23 18:33:00.595000 audit: BPF prog-id=176 op=LOAD Jan 23 18:33:00.595000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:00.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653535653730636630343437373666373165333565343537366636 Jan 23 18:33:00.595000 audit: BPF prog-id=176 op=UNLOAD Jan 23 18:33:00.595000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:00.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653535653730636630343437373666373165333565343537366636 Jan 23 18:33:00.595000 audit: BPF prog-id=177 op=LOAD Jan 23 18:33:00.595000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:00.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653535653730636630343437373666373165333565343537366636 Jan 23 18:33:00.595000 audit: BPF prog-id=178 op=LOAD Jan 23 18:33:00.595000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:00.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653535653730636630343437373666373165333565343537366636 Jan 23 18:33:00.595000 audit: BPF prog-id=178 op=UNLOAD Jan 23 18:33:00.595000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:00.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653535653730636630343437373666373165333565343537366636 Jan 23 18:33:00.595000 audit: BPF prog-id=177 op=UNLOAD Jan 23 18:33:00.595000 audit[4197]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:00.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653535653730636630343437373666373165333565343537366636 Jan 23 18:33:00.595000 audit: BPF prog-id=179 op=LOAD Jan 23 18:33:00.595000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:00.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163653535653730636630343437373666373165333565343537366636 Jan 23 18:33:00.629231 containerd[1695]: time="2026-01-23T18:33:00.629194790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74f9f4c8f6-267pf,Uid:61d5f304-fb8b-48e8-ae8d-711deece6e7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ce55e70cf044776f71e35e4576f6fa1bb7cc98c9d9c270636b03a6c8635914e\"" Jan 23 18:33:00.631082 containerd[1695]: time="2026-01-23T18:33:00.631062303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:33:00.973304 containerd[1695]: time="2026-01-23T18:33:00.973228993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:00.975419 containerd[1695]: time="2026-01-23T18:33:00.975296157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:33:00.976034 containerd[1695]: time="2026-01-23T18:33:00.975359335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:00.976714 kubelet[2940]: E0123 18:33:00.975928 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:33:00.976714 kubelet[2940]: E0123 18:33:00.975987 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:33:00.976714 kubelet[2940]: E0123 18:33:00.976094 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" logger="UnhandledError" Jan 23 18:33:00.979657 containerd[1695]: time="2026-01-23T18:33:00.979607760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:33:01.294260 containerd[1695]: time="2026-01-23T18:33:01.293490986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:01.295889 containerd[1695]: time="2026-01-23T18:33:01.295460689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:33:01.297249 containerd[1695]: time="2026-01-23T18:33:01.295564889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:01.297460 kubelet[2940]: E0123 18:33:01.296310 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:33:01.297460 kubelet[2940]: E0123 18:33:01.296385 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:33:01.297460 kubelet[2940]: E0123 18:33:01.296525 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:01.297460 kubelet[2940]: E0123 18:33:01.296611 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:33:01.602169 systemd-networkd[1576]: cali257f79d8b91: Gained IPv6LL Jan 23 18:33:01.712481 kubelet[2940]: I0123 18:33:01.712313 2940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0640ac-fea8-465c-8ec0-aaa03f5b2c96" path="/var/lib/kubelet/pods/cc0640ac-fea8-465c-8ec0-aaa03f5b2c96/volumes" Jan 23 18:33:01.896139 kubelet[2940]: E0123 18:33:01.895682 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:33:01.966000 audit[4276]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=4276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:01.966000 audit[4276]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffec79a3ab0 a2=0 a3=7ffec79a3a9c items=0 ppid=3094 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:01.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:01.976000 audit[4276]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=4276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:01.976000 audit[4276]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec79a3ab0 a2=0 a3=0 items=0 ppid=3094 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:01.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:04.716577 containerd[1695]: time="2026-01-23T18:33:04.716428629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-nhm59,Uid:d7423672-957c-488d-baee-8a9e9c290e13,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:33:04.718416 containerd[1695]: time="2026-01-23T18:33:04.718386966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5xlg5,Uid:bffd8927-96ef-4cdf-bf10-cb3549db7c56,Namespace:kube-system,Attempt:0,}" Jan 23 18:33:04.884374 systemd-networkd[1576]: calie15a66b829f: Link UP Jan 23 18:33:04.885916 systemd-networkd[1576]: calie15a66b829f: Gained carrier Jan 23 18:33:04.902884 containerd[1695]: 2026-01-23 18:33:04.774 [INFO][4337] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:04.902884 containerd[1695]: 2026-01-23 18:33:04.787 [INFO][4337] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0 coredns-66bc5c9577- kube-system bffd8927-96ef-4cdf-bf10-cb3549db7c56 792 0 2026-01-23 18:32:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 coredns-66bc5c9577-5xlg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie15a66b829f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-" Jan 23 18:33:04.902884 containerd[1695]: 2026-01-23 18:33:04.788 [INFO][4337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" Jan 23 18:33:04.902884 containerd[1695]: 2026-01-23 18:33:04.829 [INFO][4365] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" HandleID="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.829 [INFO][4365] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" HandleID="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"coredns-66bc5c9577-5xlg5", "timestamp":"2026-01-23 18:33:04.82912869 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.829 [INFO][4365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.829 [INFO][4365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
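[Annotation, not part of the log] The audit PROCTITLE records above encode the audited process's command line as one hex string with NUL-separated arguments. A minimal Python sketch to decode such a blob back into argv; the runc entries decode to `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…` (the task ID is truncated inside the record itself).

```python
# Decode an audit PROCTITLE value: hex-encoded argv, arguments separated by NUL bytes.
def decode_proctitle(hex_blob: str) -> list[str]:
    raw = bytes.fromhex(hex_blob)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    # The iptables-restore PROCTITLE logged above decodes to:
    #   ['iptables-restore', '-w', '5', '--noflush', '--counters']
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
    ))
```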
Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.830 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.839 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.845 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.851 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.853 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903079 containerd[1695]: 2026-01-23 18:33:04.855 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903261 containerd[1695]: 2026-01-23 18:33:04.855 [INFO][4365] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903261 containerd[1695]: 2026-01-23 18:33:04.857 [INFO][4365] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2 Jan 23 18:33:04.903261 containerd[1695]: 2026-01-23 18:33:04.862 [INFO][4365] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903261 containerd[1695]: 2026-01-23 18:33:04.869 [INFO][4365] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.66/26] block=192.168.123.64/26 handle="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903261 containerd[1695]: 2026-01-23 18:33:04.869 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.66/26] handle="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:04.903261 containerd[1695]: 2026-01-23 18:33:04.869 [INFO][4365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
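[Annotation, not part of the log] The IPAM entries above show the node holding an affinity for the block 192.168.123.64/26 and claiming 192.168.123.66 from it. A small Python sketch, for illustration only (this is not Calico code), of what that block covers:

```python
import ipaddress

# The affine block Calico IPAM allocates from, as logged above.
block = ipaddress.ip_network("192.168.123.64/26")
claimed = ipaddress.ip_address("192.168.123.66")

print(block.num_addresses)       # 64 addresses, 192.168.123.64 through 192.168.123.127
print(claimed in block)          # True: the claimed address falls inside the affine block
print(block.broadcast_address)   # 192.168.123.127, the top of the block
```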
Jan 23 18:33:04.903261 containerd[1695]: 2026-01-23 18:33:04.869 [INFO][4365] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.66/26] IPv6=[] ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" HandleID="k8s-pod-network.30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" Jan 23 18:33:04.903394 containerd[1695]: 2026-01-23 18:33:04.872 [INFO][4337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bffd8927-96ef-4cdf-bf10-cb3549db7c56", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"coredns-66bc5c9577-5xlg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie15a66b829f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:04.903394 containerd[1695]: 2026-01-23 18:33:04.872 [INFO][4337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.66/32] ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" Jan 23 18:33:04.903394 containerd[1695]: 2026-01-23 18:33:04.873 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie15a66b829f ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" 
WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" Jan 23 18:33:04.903394 containerd[1695]: 2026-01-23 18:33:04.886 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" Jan 23 18:33:04.903394 containerd[1695]: 2026-01-23 18:33:04.886 [INFO][4337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bffd8927-96ef-4cdf-bf10-cb3549db7c56", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2", Pod:"coredns-66bc5c9577-5xlg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie15a66b829f", MAC:"42:31:81:09:3e:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:04.903886 containerd[1695]: 2026-01-23 18:33:04.901 [INFO][4337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" Namespace="kube-system" Pod="coredns-66bc5c9577-5xlg5" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--5xlg5-eth0" Jan 23 18:33:04.931447 containerd[1695]: time="2026-01-23T18:33:04.931413461Z" level=info msg="connecting to shim 30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2" 
address="unix:///run/containerd/s/37dccf3c11c744c9e52bedb7329561fc697dab8a4cdd42fad34180e6b7ebc0f2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:04.957009 systemd[1]: Started cri-containerd-30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2.scope - libcontainer container 30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2. Jan 23 18:33:04.971000 audit: BPF prog-id=180 op=LOAD Jan 23 18:33:04.973320 kernel: kauditd_printk_skb: 33 callbacks suppressed Jan 23 18:33:04.973376 kernel: audit: type=1334 audit(1769193184.971:572): prog-id=180 op=LOAD Jan 23 18:33:04.974000 audit: BPF prog-id=181 op=LOAD Jan 23 18:33:04.978833 kernel: audit: type=1334 audit(1769193184.974:573): prog-id=181 op=LOAD Jan 23 18:33:04.974000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.984831 kernel: audit: type=1300 audit(1769193184.974:573): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:04.992350 kernel: audit: type=1327 audit(1769193184.974:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:04.974000 audit: BPF prog-id=181 op=UNLOAD Jan 23 18:33:04.994215 kernel: audit: type=1334 audit(1769193184.974:574): prog-id=181 op=UNLOAD Jan 23 18:33:04.994324 systemd-networkd[1576]: cali1eff6cc97a1: Link UP Jan 23 18:33:04.994642 systemd-networkd[1576]: cali1eff6cc97a1: Gained carrier Jan 23 18:33:04.974000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.999832 kernel: audit: type=1300 audit(1769193184.974:574): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:05.009827 kernel: audit: type=1327 audit(1769193184.974:574): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:04.974000 audit: BPF prog-id=182 op=LOAD Jan 23 18:33:04.974000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.013243 kernel: audit: type=1334 audit(1769193184.974:575): prog-id=182 op=LOAD Jan 23 18:33:05.013280 kernel: audit: type=1300 audit(1769193184.974:575): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:05.022915 kernel: audit: type=1327 audit(1769193184.974:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.786 [INFO][4333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.801 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0 calico-apiserver-67759dc977- calico-apiserver d7423672-957c-488d-baee-8a9e9c290e13 804 0 2026-01-23 18:32:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67759dc977 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 calico-apiserver-67759dc977-nhm59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1eff6cc97a1 [] [] }} ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.801 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.849 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" 
HandleID="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.850 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" HandleID="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f880), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"calico-apiserver-67759dc977-nhm59", "timestamp":"2026-01-23 18:33:04.849928369 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.850 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.869 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.869 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.941 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.948 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.955 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.958 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.962 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.962 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.963 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3 Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.967 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.977 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.67/26] block=192.168.123.64/26 handle="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 
containerd[1695]: 2026-01-23 18:33:04.977 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.67/26] handle="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.977 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:33:05.023113 containerd[1695]: 2026-01-23 18:33:04.977 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.67/26] IPv6=[] ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" HandleID="k8s-pod-network.c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" Jan 23 18:33:05.023582 containerd[1695]: 2026-01-23 18:33:04.979 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0", GenerateName:"calico-apiserver-67759dc977-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7423672-957c-488d-baee-8a9e9c290e13", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67759dc977", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"calico-apiserver-67759dc977-nhm59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1eff6cc97a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:05.023582 containerd[1695]: 2026-01-23 18:33:04.980 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.67/32] ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" Jan 23 18:33:05.023582 containerd[1695]: 2026-01-23 18:33:04.980 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eff6cc97a1 ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" Jan 23 18:33:05.023582 containerd[1695]: 2026-01-23 
18:33:04.994 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" Jan 23 18:33:05.023582 containerd[1695]: 2026-01-23 18:33:04.994 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0", GenerateName:"calico-apiserver-67759dc977-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7423672-957c-488d-baee-8a9e9c290e13", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67759dc977", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3", Pod:"calico-apiserver-67759dc977-nhm59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1eff6cc97a1", MAC:"8a:35:0d:48:d2:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:05.023582 containerd[1695]: 2026-01-23 18:33:05.018 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-nhm59" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--nhm59-eth0" Jan 23 18:33:04.974000 audit: BPF prog-id=183 op=LOAD Jan 23 18:33:04.974000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:04.974000 audit: BPF prog-id=183 op=UNLOAD Jan 23 18:33:04.974000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=17 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:04.974000 audit: BPF prog-id=182 op=UNLOAD Jan 23 18:33:04.974000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:04.974000 audit: BPF prog-id=184 op=LOAD Jan 23 18:33:04.974000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4396 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:04.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330646231326261306639366339333866366334373530363333613535 Jan 23 18:33:05.041064 containerd[1695]: time="2026-01-23T18:33:05.040980238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5xlg5,Uid:bffd8927-96ef-4cdf-bf10-cb3549db7c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2\"" Jan 23 18:33:05.046307 containerd[1695]: time="2026-01-23T18:33:05.046057472Z" level=info msg="CreateContainer within sandbox \"30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:33:05.066642 containerd[1695]: time="2026-01-23T18:33:05.066607948Z" level=info msg="connecting to shim c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3" address="unix:///run/containerd/s/523f6428f1a0a2b7c7e621b243a88eb26a150706472d895922181686b4f1de5b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:05.083990 systemd[1]: Started cri-containerd-c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3.scope - libcontainer container c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3. 
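[Annotation, not part of the log] In the v3.WorkloadEndpoint dumps above, the Ports field prints port numbers in hex (Port:0x35, 0x23c1, 0x1f90, 0x1ff5), while the earlier endpoint summary lists them in decimal. A quick Python sketch, purely illustrative, mapping the hex values back to the named coredns ports:

```python
# Hex port values from the WorkloadEndpoint dump, keyed by the port names it lists.
ports = {
    "dns": 0x35,               # 53/UDP
    "dns-tcp": 0x35,           # 53/TCP
    "metrics": 0x23C1,         # 9153/TCP
    "liveness-probe": 0x1F90,  # 8080/TCP
    "readiness-probe": 0x1FF5, # 8181/TCP
}

for name, port in ports.items():
    print(f"{name}: {port}")   # 53, 53, 9153, 8080, 8181 -- matching the decimal list above
```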
Jan 23 18:33:05.094000 audit: BPF prog-id=185 op=LOAD Jan 23 18:33:05.094000 audit: BPF prog-id=186 op=LOAD Jan 23 18:33:05.095572 containerd[1695]: time="2026-01-23T18:33:05.095544735Z" level=info msg="Container 45c796aad47807508cb59b3e671d74a0dbebb652a9170879126ac7c2638ab17a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:33:05.094000 audit[4460]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353932373237643230663565336338313963363362393131343038 Jan 23 18:33:05.095000 audit: BPF prog-id=186 op=UNLOAD Jan 23 18:33:05.095000 audit[4460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353932373237643230663565336338313963363362393131343038 Jan 23 18:33:05.095000 audit: BPF prog-id=187 op=LOAD Jan 23 18:33:05.095000 audit[4460]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353932373237643230663565336338313963363362393131343038 Jan 23 18:33:05.095000 audit: BPF prog-id=188 op=LOAD Jan 23 18:33:05.095000 audit[4460]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353932373237643230663565336338313963363362393131343038 Jan 23 18:33:05.095000 audit: BPF prog-id=188 op=UNLOAD Jan 23 18:33:05.095000 audit[4460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.095000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353932373237643230663565336338313963363362393131343038 Jan 23 18:33:05.096000 audit: BPF prog-id=187 op=UNLOAD Jan 23 18:33:05.096000 audit[4460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353932373237643230663565336338313963363362393131343038 Jan 23 18:33:05.096000 audit: BPF prog-id=189 op=LOAD Jan 23 18:33:05.096000 audit[4460]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353932373237643230663565336338313963363362393131343038 Jan 23 18:33:05.109386 containerd[1695]: time="2026-01-23T18:33:05.109364185Z" level=info msg="CreateContainer within sandbox \"30db12ba0f96c938f6c4750633a5528c8e5483cf47356f75a21a88440a84e1d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45c796aad47807508cb59b3e671d74a0dbebb652a9170879126ac7c2638ab17a\"" Jan 23 18:33:05.110063 containerd[1695]: time="2026-01-23T18:33:05.109955443Z" level=info msg="StartContainer for \"45c796aad47807508cb59b3e671d74a0dbebb652a9170879126ac7c2638ab17a\"" Jan 23 18:33:05.111757 containerd[1695]: time="2026-01-23T18:33:05.111216042Z" level=info msg="connecting to shim 45c796aad47807508cb59b3e671d74a0dbebb652a9170879126ac7c2638ab17a" address="unix:///run/containerd/s/37dccf3c11c744c9e52bedb7329561fc697dab8a4cdd42fad34180e6b7ebc0f2" protocol=ttrpc version=3 Jan 23 18:33:05.127988 systemd[1]: Started cri-containerd-45c796aad47807508cb59b3e671d74a0dbebb652a9170879126ac7c2638ab17a.scope - libcontainer container 45c796aad47807508cb59b3e671d74a0dbebb652a9170879126ac7c2638ab17a. 
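[Annotation, not part of the log] The repeated `fetch failed after status: 404 Not Found` entries (for the whisker images above, and for the apiserver image a few lines further on) indicate the requested tags simply do not exist under ghcr.io/flatcar/calico. A rough Python sketch of how one might confirm that by hand through the registry HTTP API; the anonymous-token endpoint used here is an assumption about ghcr.io, not something taken from this log:

```python
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    """Probe a tag on ghcr.io via the Docker Registry v2 manifests endpoint.

    Assumes ghcr.io issues anonymous pull tokens from /token (an assumption,
    not confirmed by the log); adjust for registries with other auth flows.
    """
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull&service=ghcr.io"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# The kubelet/containerd errors above imply this prints False.
print(tag_exists("flatcar/calico/whisker", "v3.30.4"))
```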
Jan 23 18:33:05.141000 audit: BPF prog-id=190 op=LOAD Jan 23 18:33:05.141000 audit: BPF prog-id=191 op=LOAD Jan 23 18:33:05.141000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4396 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633739366161643437383037353038636235396233653637316437 Jan 23 18:33:05.143367 containerd[1695]: time="2026-01-23T18:33:05.143344420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-nhm59,Uid:d7423672-957c-488d-baee-8a9e9c290e13,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c9592727d20f5e3c819c63b911408cfe33133873202fa5e4baee23f660aad6c3\"" Jan 23 18:33:05.142000 audit: BPF prog-id=191 op=UNLOAD Jan 23 18:33:05.142000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633739366161643437383037353038636235396233653637316437 Jan 23 18:33:05.142000 audit: BPF prog-id=192 op=LOAD Jan 23 18:33:05.142000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4396 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633739366161643437383037353038636235396233653637316437 Jan 23 18:33:05.143000 audit: BPF prog-id=193 op=LOAD Jan 23 18:33:05.143000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4396 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633739366161643437383037353038636235396233653637316437 Jan 23 18:33:05.143000 audit: BPF prog-id=193 op=UNLOAD Jan 23 18:33:05.143000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.143000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633739366161643437383037353038636235396233653637316437 Jan 23 18:33:05.143000 audit: BPF prog-id=192 op=UNLOAD Jan 23 18:33:05.143000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633739366161643437383037353038636235396233653637316437 Jan 23 18:33:05.143000 audit: BPF prog-id=194 op=LOAD Jan 23 18:33:05.143000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4396 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:05.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633739366161643437383037353038636235396233653637316437 Jan 23 18:33:05.145966 containerd[1695]: time="2026-01-23T18:33:05.145911619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:33:05.162387 containerd[1695]: time="2026-01-23T18:33:05.162355069Z" level=info msg="StartContainer for \"45c796aad47807508cb59b3e671d74a0dbebb652a9170879126ac7c2638ab17a\" returns successfully" Jan 23 18:33:05.487495 containerd[1695]: time="2026-01-23T18:33:05.487416819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:05.489326 containerd[1695]: time="2026-01-23T18:33:05.489261801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:33:05.489443 containerd[1695]: time="2026-01-23T18:33:05.489396259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:05.489891 kubelet[2940]: E0123 18:33:05.489785 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:05.490569 kubelet[2940]: E0123 18:33:05.489944 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:05.490952 kubelet[2940]: E0123 18:33:05.490889 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-67759dc977-nhm59_calico-apiserver(d7423672-957c-488d-baee-8a9e9c290e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:05.491053 kubelet[2940]: E0123 18:33:05.490981 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:33:05.716648 containerd[1695]: time="2026-01-23T18:33:05.716517890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xmb4s,Uid:d1b9504d-be7a-4b41-b198-d33537aa128d,Namespace:calico-system,Attempt:0,}" Jan 23 18:33:05.719570 containerd[1695]: time="2026-01-23T18:33:05.719439727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4bbc,Uid:1bbaf1bf-0602-4b75-8639-4ec842393e67,Namespace:calico-system,Attempt:0,}" Jan 23 18:33:05.907909 systemd-networkd[1576]: cali767627f0d9e: Link UP Jan 23 18:33:05.909685 systemd-networkd[1576]: cali767627f0d9e: Gained carrier Jan 23 18:33:05.921429 kubelet[2940]: E0123 18:33:05.921400 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.804 [INFO][4533] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.822 [INFO][4533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0 csi-node-driver- calico-system 1bbaf1bf-0602-4b75-8639-4ec842393e67 700 0 2026-01-23 18:32:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 csi-node-driver-p4bbc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali767627f0d9e [] [] }} ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.822 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" Jan 23 
18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.859 [INFO][4554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" HandleID="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.859 [INFO][4554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" HandleID="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002aca90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"csi-node-driver-p4bbc", "timestamp":"2026-01-23 18:33:05.859564326 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.859 [INFO][4554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.859 [INFO][4554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.859 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.867 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.871 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.877 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.879 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.882 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.882 [INFO][4554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.883 [INFO][4554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.887 [INFO][4554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.895 [INFO][4554] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.123.68/26] block=192.168.123.64/26 handle="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.895 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.68/26] handle="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.895 [INFO][4554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:33:05.934283 containerd[1695]: 2026-01-23 18:33:05.895 [INFO][4554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.68/26] IPv6=[] ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" HandleID="k8s-pod-network.86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" Jan 23 18:33:05.936140 containerd[1695]: 2026-01-23 18:33:05.898 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1bbaf1bf-0602-4b75-8639-4ec842393e67", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"csi-node-driver-p4bbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali767627f0d9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:05.936140 containerd[1695]: 2026-01-23 18:33:05.898 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.68/32] ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" Jan 23 18:33:05.936140 containerd[1695]: 2026-01-23 18:33:05.898 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali767627f0d9e ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" 
WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" Jan 23 18:33:05.936140 containerd[1695]: 2026-01-23 18:33:05.908 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" Jan 23 18:33:05.936140 containerd[1695]: 2026-01-23 18:33:05.913 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1bbaf1bf-0602-4b75-8639-4ec842393e67", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d", Pod:"csi-node-driver-p4bbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali767627f0d9e", MAC:"82:95:ca:01:ae:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:05.936140 containerd[1695]: 2026-01-23 18:33:05.931 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" Namespace="calico-system" Pod="csi-node-driver-p4bbc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-csi--node--driver--p4bbc-eth0" Jan 23 18:33:05.955846 kubelet[2940]: I0123 18:33:05.955610 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5xlg5" podStartSLOduration=34.955594626 podStartE2EDuration="34.955594626s" podCreationTimestamp="2026-01-23 18:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:33:05.954782151 +0000 UTC m=+42.339553153" watchObservedRunningTime="2026-01-23 18:33:05.955594626 +0000 UTC m=+42.340365833" Jan 23 18:33:05.973379 containerd[1695]: time="2026-01-23T18:33:05.973245500Z" level=info msg="connecting to shim 86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d" 
address="unix:///run/containerd/s/7847339b6175c069c2a5cf9bd7d5ac412da6540dd54caf77443c56ef11c256c7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:06.003000 audit[4605]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:06.003000 audit[4605]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff563c6190 a2=0 a3=7fff563c617c items=0 ppid=3094 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:06.014000 audit[4605]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:06.014000 audit[4605]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff563c6190 a2=0 a3=0 items=0 ppid=3094 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:06.025103 systemd[1]: Started cri-containerd-86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d.scope - libcontainer container 86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d. Jan 23 18:33:06.038426 systemd-networkd[1576]: cali08a0610ef93: Link UP Jan 23 18:33:06.038614 systemd-networkd[1576]: cali08a0610ef93: Gained carrier Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.796 [INFO][4522] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.819 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0 goldmane-7c778bb748- calico-system d1b9504d-be7a-4b41-b198-d33537aa128d 802 0 2026-01-23 18:32:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 goldmane-7c778bb748-xmb4s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali08a0610ef93 [] [] }} ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.819 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.862 [INFO][4552] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" 
HandleID="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.862 [INFO][4552] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" HandleID="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"goldmane-7c778bb748-xmb4s", "timestamp":"2026-01-23 18:33:05.862303514 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.862 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.895 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.895 [INFO][4552] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.969 [INFO][4552] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.977 [INFO][4552] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.988 [INFO][4552] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:05.995 [INFO][4552] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:06.000 [INFO][4552] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:06.000 [INFO][4552] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:06.003 [INFO][4552] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072 Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:06.014 [INFO][4552] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:06.027 [INFO][4552] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.69/26] block=192.168.123.64/26 handle="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 
18:33:06.027 [INFO][4552] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.69/26] handle="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:06.027 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:33:06.060297 containerd[1695]: 2026-01-23 18:33:06.027 [INFO][4552] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.69/26] IPv6=[] ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" HandleID="k8s-pod-network.b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" Jan 23 18:33:06.062364 containerd[1695]: 2026-01-23 18:33:06.031 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d1b9504d-be7a-4b41-b198-d33537aa128d", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"goldmane-7c778bb748-xmb4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08a0610ef93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:06.062364 containerd[1695]: 2026-01-23 18:33:06.031 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.69/32] ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" Jan 23 18:33:06.062364 containerd[1695]: 2026-01-23 18:33:06.031 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08a0610ef93 ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" Jan 23 18:33:06.062364 containerd[1695]: 2026-01-23 18:33:06.038 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" 
Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" Jan 23 18:33:06.062364 containerd[1695]: 2026-01-23 18:33:06.044 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d1b9504d-be7a-4b41-b198-d33537aa128d", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072", Pod:"goldmane-7c778bb748-xmb4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08a0610ef93", MAC:"16:48:f0:77:0e:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:06.062364 containerd[1695]: 2026-01-23 18:33:06.057 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" Namespace="calico-system" Pod="goldmane-7c778bb748-xmb4s" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-goldmane--7c778bb748--xmb4s-eth0" Jan 23 18:33:06.064000 audit: BPF prog-id=195 op=LOAD Jan 23 18:33:06.065000 audit: BPF prog-id=196 op=LOAD Jan 23 18:33:06.065000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4593 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646239616130313036613531633635333331383537666431633865 Jan 23 18:33:06.065000 audit: BPF prog-id=196 op=UNLOAD Jan 23 18:33:06.065000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4593 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.065000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646239616130313036613531633635333331383537666431633865 Jan 23 18:33:06.065000 audit: BPF prog-id=197 op=LOAD Jan 23 18:33:06.065000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4593 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646239616130313036613531633635333331383537666431633865 Jan 23 18:33:06.065000 audit: BPF prog-id=198 op=LOAD Jan 23 18:33:06.065000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4593 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646239616130313036613531633635333331383537666431633865 Jan 23 18:33:06.066000 audit: BPF prog-id=198 op=UNLOAD Jan 23 18:33:06.066000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4593 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646239616130313036613531633635333331383537666431633865 Jan 23 18:33:06.066000 audit: BPF prog-id=197 op=UNLOAD Jan 23 18:33:06.066000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4593 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646239616130313036613531633635333331383537666431633865 Jan 23 18:33:06.066000 audit: BPF prog-id=199 op=LOAD Jan 23 18:33:06.066000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4593 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.066000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836646239616130313036613531633635333331383537666431633865 Jan 23 18:33:06.095366 containerd[1695]: time="2026-01-23T18:33:06.095332241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4bbc,Uid:1bbaf1bf-0602-4b75-8639-4ec842393e67,Namespace:calico-system,Attempt:0,} returns sandbox id \"86db9aa0106a51c65331857fd1c8e1d0b5887d5620070fb7651748f6932ea08d\"" Jan 23 18:33:06.098209 containerd[1695]: time="2026-01-23T18:33:06.098167609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:33:06.105310 containerd[1695]: time="2026-01-23T18:33:06.105205454Z" level=info msg="connecting to shim b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072" address="unix:///run/containerd/s/71659699defa696c478d703c1f6ea30925e5b5e8497e3cb1f1100b0fabfd0b9c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:06.132025 systemd[1]: Started cri-containerd-b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072.scope - libcontainer container b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072. Jan 23 18:33:06.143000 audit: BPF prog-id=200 op=LOAD Jan 23 18:33:06.144000 audit: BPF prog-id=201 op=LOAD Jan 23 18:33:06.144000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663566626136616131323231303335653039303163656463303266 Jan 23 18:33:06.144000 audit: BPF prog-id=201 op=UNLOAD Jan 23 18:33:06.144000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663566626136616131323231303335653039303163656463303266 Jan 23 18:33:06.144000 audit: BPF prog-id=202 op=LOAD Jan 23 18:33:06.144000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663566626136616131323231303335653039303163656463303266 Jan 23 18:33:06.144000 audit: BPF prog-id=203 op=LOAD Jan 23 18:33:06.144000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 
items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663566626136616131323231303335653039303163656463303266 Jan 23 18:33:06.144000 audit: BPF prog-id=203 op=UNLOAD Jan 23 18:33:06.144000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663566626136616131323231303335653039303163656463303266 Jan 23 18:33:06.144000 audit: BPF prog-id=202 op=UNLOAD Jan 23 18:33:06.144000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663566626136616131323231303335653039303163656463303266 Jan 23 18:33:06.144000 audit: BPF prog-id=204 op=LOAD Jan 23 18:33:06.144000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663566626136616131323231303335653039303163656463303266 Jan 23 18:33:06.179020 containerd[1695]: time="2026-01-23T18:33:06.178891363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xmb4s,Uid:d1b9504d-be7a-4b41-b198-d33537aa128d,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9f5fba6aa1221035e0901cedc02f2c66d1c1078e039c4cd927ba3268e584072\"" Jan 23 18:33:06.210073 systemd-networkd[1576]: calie15a66b829f: Gained IPv6LL Jan 23 18:33:06.424766 containerd[1695]: time="2026-01-23T18:33:06.424666994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:06.427114 containerd[1695]: time="2026-01-23T18:33:06.427035418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:33:06.427268 containerd[1695]: time="2026-01-23T18:33:06.427201595Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:06.427550 kubelet[2940]: E0123 18:33:06.427475 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:33:06.427923 kubelet[2940]: E0123 18:33:06.427565 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:33:06.428474 kubelet[2940]: E0123 18:33:06.428328 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:06.430533 containerd[1695]: time="2026-01-23T18:33:06.430368737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:33:06.714527 containerd[1695]: time="2026-01-23T18:33:06.714333660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65674f688d-kxr2f,Uid:fbf77e94-7d63-4cf9-9744-b692622d727e,Namespace:calico-system,Attempt:0,}" Jan 23 18:33:06.768236 containerd[1695]: time="2026-01-23T18:33:06.766975451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:06.770697 containerd[1695]: time="2026-01-23T18:33:06.770603120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:33:06.770890 containerd[1695]: time="2026-01-23T18:33:06.770753336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:06.771202 kubelet[2940]: E0123 18:33:06.771118 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:33:06.773499 kubelet[2940]: E0123 18:33:06.771732 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:33:06.773499 kubelet[2940]: E0123 18:33:06.772035 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xmb4s_calico-system(d1b9504d-be7a-4b41-b198-d33537aa128d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:06.773499 kubelet[2940]: E0123 18:33:06.772108 2940 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:33:06.773789 containerd[1695]: time="2026-01-23T18:33:06.772379056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:33:06.787440 systemd-networkd[1576]: cali1eff6cc97a1: Gained IPv6LL Jan 23 18:33:06.867961 systemd-networkd[1576]: cali92171128b85: Link UP Jan 23 18:33:06.869069 systemd-networkd[1576]: cali92171128b85: Gained carrier Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.782 [INFO][4688] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.804 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0 calico-kube-controllers-65674f688d- calico-system fbf77e94-7d63-4cf9-9744-b692622d727e 801 0 2026-01-23 18:32:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65674f688d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 calico-kube-controllers-65674f688d-kxr2f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92171128b85 [] [] }} ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.804 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.827 [INFO][4700] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" HandleID="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.827 [INFO][4700] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" HandleID="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"calico-kube-controllers-65674f688d-kxr2f", "timestamp":"2026-01-23 18:33:06.827162211 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.827 [INFO][4700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.827 [INFO][4700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.827 [INFO][4700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.835 [INFO][4700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.840 [INFO][4700] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.844 [INFO][4700] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.846 [INFO][4700] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.848 [INFO][4700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.848 [INFO][4700] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.850 [INFO][4700] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7 Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.854 [INFO][4700] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.861 [INFO][4700] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.70/26] block=192.168.123.64/26 handle="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.862 [INFO][4700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.70/26] handle="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.862 [INFO][4700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:33:06.881089 containerd[1695]: 2026-01-23 18:33:06.862 [INFO][4700] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.70/26] IPv6=[] ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" HandleID="k8s-pod-network.c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" Jan 23 18:33:06.882733 containerd[1695]: 2026-01-23 18:33:06.863 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0", GenerateName:"calico-kube-controllers-65674f688d-", Namespace:"calico-system", SelfLink:"", UID:"fbf77e94-7d63-4cf9-9744-b692622d727e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65674f688d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"calico-kube-controllers-65674f688d-kxr2f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92171128b85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:06.882733 containerd[1695]: 2026-01-23 18:33:06.864 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.70/32] ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" Jan 23 18:33:06.882733 containerd[1695]: 2026-01-23 18:33:06.864 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92171128b85 ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" Jan 23 18:33:06.882733 containerd[1695]: 2026-01-23 18:33:06.869 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" 
WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" Jan 23 18:33:06.882733 containerd[1695]: 2026-01-23 18:33:06.869 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0", GenerateName:"calico-kube-controllers-65674f688d-", Namespace:"calico-system", SelfLink:"", UID:"fbf77e94-7d63-4cf9-9744-b692622d727e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65674f688d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7", Pod:"calico-kube-controllers-65674f688d-kxr2f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92171128b85", MAC:"5a:97:68:f0:93:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:06.882733 containerd[1695]: 2026-01-23 18:33:06.879 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" Namespace="calico-system" Pod="calico-kube-controllers-65674f688d-kxr2f" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--kube--controllers--65674f688d--kxr2f-eth0" Jan 23 18:33:06.916020 containerd[1695]: time="2026-01-23T18:33:06.915982955Z" level=info msg="connecting to shim c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7" address="unix:///run/containerd/s/d6e25fe8b0a44eefc6c4dbf6f0eb3e1b613e50dc8c3655146ed96a9c81158e4d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:06.941249 systemd[1]: Started cri-containerd-c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7.scope - libcontainer container c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7. 
Jan 23 18:33:06.943279 kubelet[2940]: E0123 18:33:06.943120 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:33:06.945943 kubelet[2940]: E0123 18:33:06.945918 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:33:06.974000 audit: BPF prog-id=205 op=LOAD Jan 23 18:33:06.977000 audit: BPF prog-id=206 op=LOAD Jan 23 18:33:06.977000 audit[4736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4724 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335616232346632613339623333346333626564393164623463303839 Jan 23 18:33:06.977000 audit: BPF prog-id=206 op=UNLOAD Jan 23 18:33:06.977000 audit[4736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335616232346632613339623333346333626564393164623463303839 Jan 23 18:33:06.977000 audit: BPF prog-id=207 op=LOAD Jan 23 18:33:06.977000 audit[4736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4724 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335616232346632613339623333346333626564393164623463303839 Jan 23 18:33:06.977000 audit: BPF prog-id=208 op=LOAD Jan 23 18:33:06.977000 audit[4736]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4724 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335616232346632613339623333346333626564393164623463303839 Jan 23 18:33:06.977000 audit: BPF prog-id=208 op=UNLOAD Jan 23 18:33:06.977000 audit[4736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335616232346632613339623333346333626564393164623463303839 Jan 23 18:33:06.977000 audit: BPF prog-id=207 op=UNLOAD Jan 23 18:33:06.977000 audit[4736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335616232346632613339623333346333626564393164623463303839 Jan 23 18:33:06.977000 audit: BPF prog-id=209 op=LOAD Jan 23 18:33:06.977000 audit[4736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4724 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:06.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335616232346632613339623333346333626564393164623463303839 Jan 23 18:33:07.014245 containerd[1695]: time="2026-01-23T18:33:07.014214257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65674f688d-kxr2f,Uid:fbf77e94-7d63-4cf9-9744-b692622d727e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5ab24f2a39b334c3bed91db4c08987551264e64a7ae2d4f388b3ec4dcf9adb7\"" Jan 23 18:33:07.025000 audit[4766]: NETFILTER_CFG table=filter:119 family=2 entries=19 op=nft_register_rule pid=4766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:07.025000 audit[4766]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff1f9b9170 a2=0 a3=7fff1f9b915c items=0 ppid=3094 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:07.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:07.029000 audit[4766]: NETFILTER_CFG table=nat:120 family=2 entries=33 op=nft_register_chain pid=4766 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:07.029000 audit[4766]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7fff1f9b9170 a2=0 a3=7fff1f9b915c items=0 ppid=3094 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:07.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:07.120727 containerd[1695]: time="2026-01-23T18:33:07.120541396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:07.122877 containerd[1695]: time="2026-01-23T18:33:07.122833061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:33:07.124943 containerd[1695]: time="2026-01-23T18:33:07.122985937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:07.125242 kubelet[2940]: E0123 18:33:07.125198 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:33:07.125307 kubelet[2940]: E0123 18:33:07.125253 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:33:07.125455 kubelet[2940]: E0123 18:33:07.125432 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:07.125510 kubelet[2940]: E0123 18:33:07.125482 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:33:07.126450 containerd[1695]: time="2026-01-23T18:33:07.126422852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:33:07.298144 
systemd-networkd[1576]: cali08a0610ef93: Gained IPv6LL Jan 23 18:33:07.460808 containerd[1695]: time="2026-01-23T18:33:07.460629105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:07.462554 containerd[1695]: time="2026-01-23T18:33:07.462451749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:33:07.462554 containerd[1695]: time="2026-01-23T18:33:07.462487254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:07.462752 kubelet[2940]: E0123 18:33:07.462690 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:33:07.462752 kubelet[2940]: E0123 18:33:07.462732 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:33:07.462863 kubelet[2940]: E0123 18:33:07.462810 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65674f688d-kxr2f_calico-system(fbf77e94-7d63-4cf9-9744-b692622d727e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:07.463145 kubelet[2940]: E0123 18:33:07.462876 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:33:07.874438 systemd-networkd[1576]: cali767627f0d9e: Gained IPv6LL Jan 23 18:33:07.957335 kubelet[2940]: E0123 18:33:07.957224 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:33:07.958268 kubelet[2940]: E0123 18:33:07.957448 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:33:07.959630 kubelet[2940]: E0123 18:33:07.959561 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:33:08.002064 systemd-networkd[1576]: cali92171128b85: Gained IPv6LL Jan 23 18:33:08.712896 containerd[1695]: time="2026-01-23T18:33:08.712491690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-rb8xc,Uid:2ae423e7-492d-4f77-ad52-275afa909708,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:33:08.716726 containerd[1695]: time="2026-01-23T18:33:08.716634419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ff9vl,Uid:67c7d306-b5c4-44ef-b037-2a94e6f9e21a,Namespace:kube-system,Attempt:0,}" Jan 23 18:33:08.844536 systemd-networkd[1576]: cali33a9371b9c1: Link UP Jan 23 18:33:08.844751 systemd-networkd[1576]: cali33a9371b9c1: Gained carrier Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.752 [INFO][4804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.766 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0 calico-apiserver-67759dc977- calico-apiserver 2ae423e7-492d-4f77-ad52-275afa909708 800 0 2026-01-23 18:32:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67759dc977 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 calico-apiserver-67759dc977-rb8xc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali33a9371b9c1 [] [] }} ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.766 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" Jan 23 18:33:08.858303 
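Several of the containerd/kubelet records above show the same failure shape: a 404 from ghcr.io, a PullImage error, an ErrImagePull on the pod, and then ImagePullBackOff. A small, hypothetical triage helper (the regex and the `failing_images` function are my own, not anything running on this host) that pulls the failing image references out of a journal dump:

```python
# Illustrative only: collect image references that containerd reported as
# "not found", matching the pull failures logged above.
import re
from collections import Counter

NOT_FOUND = re.compile(r"failed to resolve image: (\S+?): not found")

def failing_images(journal_text: str) -> Counter:
    """Count how often each image reference failed to resolve."""
    return Counter(NOT_FOUND.findall(journal_text))

sample = ('PullImage "ghcr.io/flatcar/calico/kube-controllers:v3.30.4" failed: '
          'failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found')
print(failing_images(sample))
# Counter({'ghcr.io/flatcar/calico/kube-controllers:v3.30.4': 1})
```

On the lines above the same pattern matches the csi, node-driver-registrar, goldmane and kube-controllers images, all at tag v3.30.4; the apiserver image fails the same way further down.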
containerd[1695]: 2026-01-23 18:33:08.798 [INFO][4827] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" HandleID="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.798 [INFO][4827] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" HandleID="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"calico-apiserver-67759dc977-rb8xc", "timestamp":"2026-01-23 18:33:08.798488211 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.798 [INFO][4827] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.798 [INFO][4827] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.798 [INFO][4827] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.805 [INFO][4827] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.811 [INFO][4827] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.814 [INFO][4827] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.815 [INFO][4827] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.817 [INFO][4827] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.817 [INFO][4827] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.818 [INFO][4827] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3 Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.821 [INFO][4827] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.831 [INFO][4827] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.123.71/26] block=192.168.123.64/26 handle="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.831 [INFO][4827] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.71/26] handle="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.831 [INFO][4827] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:33:08.858303 containerd[1695]: 2026-01-23 18:33:08.831 [INFO][4827] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.71/26] IPv6=[] ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" HandleID="k8s-pod-network.d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" Jan 23 18:33:08.859139 containerd[1695]: 2026-01-23 18:33:08.834 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0", GenerateName:"calico-apiserver-67759dc977-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ae423e7-492d-4f77-ad52-275afa909708", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67759dc977", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"calico-apiserver-67759dc977-rb8xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a9371b9c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:08.859139 containerd[1695]: 2026-01-23 18:33:08.834 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.71/32] ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" Jan 23 18:33:08.859139 containerd[1695]: 2026-01-23 18:33:08.834 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33a9371b9c1 ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" 
Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" Jan 23 18:33:08.859139 containerd[1695]: 2026-01-23 18:33:08.844 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" Jan 23 18:33:08.859139 containerd[1695]: 2026-01-23 18:33:08.845 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0", GenerateName:"calico-apiserver-67759dc977-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ae423e7-492d-4f77-ad52-275afa909708", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67759dc977", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3", Pod:"calico-apiserver-67759dc977-rb8xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a9371b9c1", MAC:"ae:fe:64:41:ae:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:08.859139 containerd[1695]: 2026-01-23 18:33:08.856 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" Namespace="calico-apiserver" Pod="calico-apiserver-67759dc977-rb8xc" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-calico--apiserver--67759dc977--rb8xc-eth0" Jan 23 18:33:08.887504 containerd[1695]: time="2026-01-23T18:33:08.887445132Z" level=info msg="connecting to shim d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3" address="unix:///run/containerd/s/defea4e31fff732d3bee1567d1dfa719f41e076b84ea85c0c6824748c0c8c5b2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:08.912979 systemd[1]: Started cri-containerd-d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3.scope - libcontainer container d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3. 
Jan 23 18:33:08.930000 audit: BPF prog-id=210 op=LOAD Jan 23 18:33:08.931000 audit: BPF prog-id=211 op=LOAD Jan 23 18:33:08.931000 audit[4869]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4858 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:08.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666535636165356265336664336438663666643531323133313866 Jan 23 18:33:08.931000 audit: BPF prog-id=211 op=UNLOAD Jan 23 18:33:08.931000 audit[4869]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:08.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666535636165356265336664336438663666643531323133313866 Jan 23 18:33:08.932000 audit: BPF prog-id=212 op=LOAD Jan 23 18:33:08.932000 audit[4869]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4858 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:08.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666535636165356265336664336438663666643531323133313866 Jan 23 18:33:08.932000 audit: BPF prog-id=213 op=LOAD Jan 23 18:33:08.932000 audit[4869]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4858 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:08.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666535636165356265336664336438663666643531323133313866 Jan 23 18:33:08.932000 audit: BPF prog-id=213 op=UNLOAD Jan 23 18:33:08.932000 audit[4869]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:08.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666535636165356265336664336438663666643531323133313866 Jan 23 18:33:08.932000 audit: BPF prog-id=212 op=UNLOAD Jan 23 18:33:08.932000 audit[4869]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4858 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:08.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666535636165356265336664336438663666643531323133313866 Jan 23 18:33:08.932000 audit: BPF prog-id=214 op=LOAD Jan 23 18:33:08.932000 audit[4869]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4858 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:08.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666535636165356265336664336438663666643531323133313866 Jan 23 18:33:08.943608 systemd-networkd[1576]: cali20d6f4a0e84: Link UP Jan 23 18:33:08.944171 systemd-networkd[1576]: cali20d6f4a0e84: Gained carrier Jan 23 18:33:08.959452 kubelet[2940]: E0123 18:33:08.959343 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.764 [INFO][4814] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.780 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0 coredns-66bc5c9577- kube-system 67c7d306-b5c4-44ef-b037-2a94e6f9e21a 797 0 2026-01-23 18:32:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-1-0-1-5b0cac0ed6 coredns-66bc5c9577-ff9vl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20d6f4a0e84 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.780 [INFO][4814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" Jan 23 
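The audit SYSCALL records in this stretch identify calls only by number. For arch=c000003e (x86-64) the three numbers that keep recurring in this log map to the calls below; the lookup table is a reading aid I am adding, not something the host runs, and the attributions in the comments are inferred from the comm= fields of the surrounding records:

```python
# Reading aid: x86-64 syscall numbers seen in the audit records of this log.
X86_64_SYSCALLS = {
    3:   "close",    # runc closing BPF program fds after attaching them
    46:  "sendmsg",  # iptables-restore sending nf_tables netlink messages
    321: "bpf",      # runc/bpftool loading BPF programs (the LOAD/UNLOAD pairs)
}

def syscall_name(arch: str, nr: int) -> str:
    return X86_64_SYSCALLS.get(nr, f"syscall {nr}") if arch.lower() == "c000003e" else f"arch {arch}?"

print(syscall_name("c000003e", 321))  # bpf
```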
18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.809 [INFO][4836] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" HandleID="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.810 [INFO][4836] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" HandleID="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-1-0-1-5b0cac0ed6", "pod":"coredns-66bc5c9577-ff9vl", "timestamp":"2026-01-23 18:33:08.809937222 +0000 UTC"}, Hostname:"ci-4547-1-0-1-5b0cac0ed6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.810 [INFO][4836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.832 [INFO][4836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.832 [INFO][4836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-1-0-1-5b0cac0ed6' Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.908 [INFO][4836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.912 [INFO][4836] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.919 [INFO][4836] ipam/ipam.go 511: Trying affinity for 192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.921 [INFO][4836] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.922 [INFO][4836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.64/26 host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.922 [INFO][4836] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.64/26 handle="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.924 [INFO][4836] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45 Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.928 [INFO][4836] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.64/26 handle="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.937 [INFO][4836] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.123.72/26] block=192.168.123.64/26 handle="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.937 [INFO][4836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.72/26] handle="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" host="ci-4547-1-0-1-5b0cac0ed6" Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.938 [INFO][4836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:33:08.970215 containerd[1695]: 2026-01-23 18:33:08.938 [INFO][4836] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.72/26] IPv6=[] ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" HandleID="k8s-pod-network.d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Workload="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" Jan 23 18:33:08.973066 containerd[1695]: 2026-01-23 18:33:08.939 [INFO][4814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67c7d306-b5c4-44ef-b037-2a94e6f9e21a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"", Pod:"coredns-66bc5c9577-ff9vl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20d6f4a0e84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:33:08.973066 containerd[1695]: 2026-01-23 18:33:08.939 [INFO][4814] cni-plugin/k8s.go 419: Calico CNI using IPs: 
[192.168.123.72/32] ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" Jan 23 18:33:08.973066 containerd[1695]: 2026-01-23 18:33:08.939 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20d6f4a0e84 ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" Jan 23 18:33:08.973066 containerd[1695]: 2026-01-23 18:33:08.944 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" Jan 23 18:33:08.973066 containerd[1695]: 2026-01-23 18:33:08.945 [INFO][4814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67c7d306-b5c4-44ef-b037-2a94e6f9e21a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-1-0-1-5b0cac0ed6", ContainerID:"d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45", Pod:"coredns-66bc5c9577-ff9vl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20d6f4a0e84", MAC:"d6:57:f3:b7:a8:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 
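The WorkloadEndpoint dump above prints the coredns port numbers in hex (Port:0x35, 0x23c1, ...). Converted back to decimal they line up with the named ports listed earlier for the same endpoint; a tiny sketch:

```python
# Sketch: hex port values from the coredns WorkloadEndpoint dump above,
# converted back to the usual decimal ports.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23C1,
         "liveness-probe": 0x1F90, "readiness-probe": 0x1FF5}

for name, value in ports.items():
    print(f"{name:16s} {value}")
# dns              53
# dns-tcp          53
# metrics          9153
# liveness-probe   8080
# readiness-probe  8181
```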
18:33:08.973257 containerd[1695]: 2026-01-23 18:33:08.967 [INFO][4814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" Namespace="kube-system" Pod="coredns-66bc5c9577-ff9vl" WorkloadEndpoint="ci--4547--1--0--1--5b0cac0ed6-k8s-coredns--66bc5c9577--ff9vl-eth0" Jan 23 18:33:09.004390 containerd[1695]: time="2026-01-23T18:33:09.004321474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67759dc977-rb8xc,Uid:2ae423e7-492d-4f77-ad52-275afa909708,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d7fe5cae5be3fd3d8f6fd5121318fc1459df71b4aefd1082d96cb71a12096aa3\"" Jan 23 18:33:09.005711 containerd[1695]: time="2026-01-23T18:33:09.005674027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:33:09.010986 containerd[1695]: time="2026-01-23T18:33:09.010943646Z" level=info msg="connecting to shim d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45" address="unix:///run/containerd/s/4852269c04d852bbaf20fecc63332eb00c9da1617f73b7a0bd99cde9e6583cca" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:33:09.038101 systemd[1]: Started cri-containerd-d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45.scope - libcontainer container d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45. Jan 23 18:33:09.047000 audit: BPF prog-id=215 op=LOAD Jan 23 18:33:09.047000 audit: BPF prog-id=216 op=LOAD Jan 23 18:33:09.047000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4910 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623966626662333037393933613361363561323336303836363536 Jan 23 18:33:09.047000 audit: BPF prog-id=216 op=UNLOAD Jan 23 18:33:09.047000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623966626662333037393933613361363561323336303836363536 Jan 23 18:33:09.047000 audit: BPF prog-id=217 op=LOAD Jan 23 18:33:09.047000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4910 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623966626662333037393933613361363561323336303836363536 Jan 23 18:33:09.047000 audit: BPF prog-id=218 op=LOAD Jan 23 
18:33:09.047000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4910 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623966626662333037393933613361363561323336303836363536 Jan 23 18:33:09.047000 audit: BPF prog-id=218 op=UNLOAD Jan 23 18:33:09.047000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623966626662333037393933613361363561323336303836363536 Jan 23 18:33:09.047000 audit: BPF prog-id=217 op=UNLOAD Jan 23 18:33:09.047000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623966626662333037393933613361363561323336303836363536 Jan 23 18:33:09.047000 audit: BPF prog-id=219 op=LOAD Jan 23 18:33:09.047000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4910 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435623966626662333037393933613361363561323336303836363536 Jan 23 18:33:09.083235 containerd[1695]: time="2026-01-23T18:33:09.083142588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ff9vl,Uid:67c7d306-b5c4-44ef-b037-2a94e6f9e21a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45\"" Jan 23 18:33:09.089310 containerd[1695]: time="2026-01-23T18:33:09.089270168Z" level=info msg="CreateContainer within sandbox \"d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:33:09.099273 containerd[1695]: time="2026-01-23T18:33:09.099242929Z" level=info msg="Container 20f89a0f81323b3204d3958a62cdb46deb4ea1a2315a9ec16c0d2ae1b4f36234: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:33:09.107773 containerd[1695]: time="2026-01-23T18:33:09.107742681Z" level=info 
msg="CreateContainer within sandbox \"d5b9fbfb307993a3a65a236086656ea77945f688f722b016990b9fd37a76ae45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20f89a0f81323b3204d3958a62cdb46deb4ea1a2315a9ec16c0d2ae1b4f36234\"" Jan 23 18:33:09.109340 containerd[1695]: time="2026-01-23T18:33:09.108390906Z" level=info msg="StartContainer for \"20f89a0f81323b3204d3958a62cdb46deb4ea1a2315a9ec16c0d2ae1b4f36234\"" Jan 23 18:33:09.110211 containerd[1695]: time="2026-01-23T18:33:09.110190464Z" level=info msg="connecting to shim 20f89a0f81323b3204d3958a62cdb46deb4ea1a2315a9ec16c0d2ae1b4f36234" address="unix:///run/containerd/s/4852269c04d852bbaf20fecc63332eb00c9da1617f73b7a0bd99cde9e6583cca" protocol=ttrpc version=3 Jan 23 18:33:09.131983 systemd[1]: Started cri-containerd-20f89a0f81323b3204d3958a62cdb46deb4ea1a2315a9ec16c0d2ae1b4f36234.scope - libcontainer container 20f89a0f81323b3204d3958a62cdb46deb4ea1a2315a9ec16c0d2ae1b4f36234. Jan 23 18:33:09.143000 audit: BPF prog-id=220 op=LOAD Jan 23 18:33:09.143000 audit: BPF prog-id=221 op=LOAD Jan 23 18:33:09.143000 audit[4950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4910 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663839613066383133323362333230346433393538613632636462 Jan 23 18:33:09.143000 audit: BPF prog-id=221 op=UNLOAD Jan 23 18:33:09.143000 audit[4950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663839613066383133323362333230346433393538613632636462 Jan 23 18:33:09.143000 audit: BPF prog-id=222 op=LOAD Jan 23 18:33:09.143000 audit[4950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4910 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663839613066383133323362333230346433393538613632636462 Jan 23 18:33:09.143000 audit: BPF prog-id=223 op=LOAD Jan 23 18:33:09.143000 audit[4950]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4910 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.143000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663839613066383133323362333230346433393538613632636462 Jan 23 18:33:09.143000 audit: BPF prog-id=223 op=UNLOAD Jan 23 18:33:09.143000 audit[4950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663839613066383133323362333230346433393538613632636462 Jan 23 18:33:09.143000 audit: BPF prog-id=222 op=UNLOAD Jan 23 18:33:09.143000 audit[4950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663839613066383133323362333230346433393538613632636462 Jan 23 18:33:09.143000 audit: BPF prog-id=224 op=LOAD Jan 23 18:33:09.143000 audit[4950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4910 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:09.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663839613066383133323362333230346433393538613632636462 Jan 23 18:33:09.166944 containerd[1695]: time="2026-01-23T18:33:09.166851592Z" level=info msg="StartContainer for \"20f89a0f81323b3204d3958a62cdb46deb4ea1a2315a9ec16c0d2ae1b4f36234\" returns successfully" Jan 23 18:33:09.383845 containerd[1695]: time="2026-01-23T18:33:09.383799705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:09.385749 containerd[1695]: time="2026-01-23T18:33:09.385676235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:33:09.385925 containerd[1695]: time="2026-01-23T18:33:09.385711486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:09.385955 kubelet[2940]: E0123 18:33:09.385909 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:09.385955 kubelet[2940]: E0123 18:33:09.385945 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:09.386045 kubelet[2940]: E0123 18:33:09.386009 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-rb8xc_calico-apiserver(2ae423e7-492d-4f77-ad52-275afa909708): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:09.386070 kubelet[2940]: E0123 18:33:09.386037 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:33:09.971301 kubelet[2940]: E0123 18:33:09.971209 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:33:09.987854 kubelet[2940]: I0123 18:33:09.987323 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ff9vl" podStartSLOduration=38.987303014 podStartE2EDuration="38.987303014s" podCreationTimestamp="2026-01-23 18:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:33:09.987153208 +0000 UTC m=+46.371924265" watchObservedRunningTime="2026-01-23 18:33:09.987303014 +0000 UTC m=+46.372074058" Jan 23 18:33:10.017000 audit[4998]: NETFILTER_CFG table=filter:121 family=2 entries=16 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:10.019344 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 23 18:33:10.019425 kernel: audit: type=1325 audit(1769193190.017:648): table=filter:121 family=2 entries=16 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:10.017000 audit[4998]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca6a6cd60 a2=0 a3=7ffca6a6cd4c items=0 ppid=3094 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:10.070838 kernel: audit: type=1300 audit(1769193190.017:648): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca6a6cd60 a2=0 a3=7ffca6a6cd4c items=0 ppid=3094 pid=4998 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:10.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:10.073828 kernel: audit: type=1327 audit(1769193190.017:648): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:10.068000 audit[4998]: NETFILTER_CFG table=nat:122 family=2 entries=42 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:10.068000 audit[4998]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffca6a6cd60 a2=0 a3=7ffca6a6cd4c items=0 ppid=3094 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:10.093892 kernel: audit: type=1325 audit(1769193190.068:649): table=nat:122 family=2 entries=42 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:10.093936 kernel: audit: type=1300 audit(1769193190.068:649): arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffca6a6cd60 a2=0 a3=7ffca6a6cd4c items=0 ppid=3094 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:10.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:10.100831 kernel: audit: type=1327 audit(1769193190.068:649): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:10.564569 systemd-networkd[1576]: cali33a9371b9c1: Gained IPv6LL Jan 23 18:33:10.690435 systemd-networkd[1576]: cali20d6f4a0e84: Gained IPv6LL Jan 23 18:33:10.975771 kubelet[2940]: E0123 18:33:10.975689 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:33:11.135000 audit[5021]: NETFILTER_CFG table=filter:123 family=2 entries=16 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:11.140854 kernel: audit: type=1325 audit(1769193191.135:650): table=filter:123 family=2 entries=16 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:11.135000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc8c7712f0 a2=0 a3=7ffc8c7712dc items=0 ppid=3094 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:11.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:11.150537 
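A few records above, pod_startup_latency_tracker reports podStartSLOduration=38.987303014 for coredns-66bc5c9577-ff9vl. That figure equals the gap between the pod's creation timestamp and the watch-observed running time in the same record, which a couple of lines of datetime arithmetic confirm (timestamps truncated to microseconds here):

```python
# Sketch: reproduce podStartSLOduration from the timestamps in the record above.
from datetime import datetime, timezone

created  = datetime(2026, 1, 23, 18, 32, 31, 0,      tzinfo=timezone.utc)
observed = datetime(2026, 1, 23, 18, 33, 9, 987303,  tzinfo=timezone.utc)

print((observed - created).total_seconds())  # 38.987303 (log reports 38.987303014)
```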
kernel: audit: type=1300 audit(1769193191.135:650): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc8c7712f0 a2=0 a3=7ffc8c7712dc items=0 ppid=3094 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:11.150659 kernel: audit: type=1327 audit(1769193191.135:650): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:11.169000 audit[5021]: NETFILTER_CFG table=nat:124 family=2 entries=54 op=nft_register_chain pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:11.169000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffc8c7712f0 a2=0 a3=7ffc8c7712dc items=0 ppid=3094 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:11.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:11.175039 kernel: audit: type=1325 audit(1769193191.169:651): table=nat:124 family=2 entries=54 op=nft_register_chain pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:13.455702 kubelet[2940]: I0123 18:33:13.455385 2940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:33:13.522000 audit[5068]: NETFILTER_CFG table=filter:125 family=2 entries=15 op=nft_register_rule pid=5068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:13.522000 audit[5068]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcdf9029e0 a2=0 a3=7ffcdf9029cc items=0 ppid=3094 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:13.528000 audit[5068]: NETFILTER_CFG table=nat:126 family=2 entries=25 op=nft_register_chain pid=5068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:33:13.528000 audit[5068]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffcdf9029e0 a2=0 a3=7ffcdf9029cc items=0 ppid=3094 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:33:13.864000 audit: BPF prog-id=225 op=LOAD Jan 23 18:33:13.864000 audit[5119]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8d25b1a0 a2=98 a3=1fffffffffffffff items=0 ppid=5103 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.864000 audit: PROCTITLE 
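The NETFILTER_CFG records scattered through this log each carry a table name, what looks like a generation counter, and an entry count (for example table=nat:126 ... entries=25 just above), emitted while iptables-restore reprograms nf_tables. An illustrative parser for pulling those fields out of a journal dump; the regex and names are my own, not from any tool on the host:

```python
# Illustrative: extract table / counter / entry-count / operation from
# NETFILTER_CFG audit records like the ones above.
import re

NFT_CFG = re.compile(r"NETFILTER_CFG table=(\w+):(\d+) family=\d+ entries=(\d+) op=(\S+)")

def netfilter_events(journal_text: str):
    for table, counter, entries, op in NFT_CFG.findall(journal_text):
        yield table, int(counter), int(entries), op

sample = "audit[5068]: NETFILTER_CFG table=nat:126 family=2 entries=25 op=nft_register_chain pid=5068"
print(list(netfilter_events(sample)))
# [('nat', 126, 25, 'nft_register_chain')]
```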
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:33:13.864000 audit: BPF prog-id=225 op=UNLOAD Jan 23 18:33:13.864000 audit[5119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc8d25b170 a3=0 items=0 ppid=5103 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.864000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:33:13.864000 audit: BPF prog-id=226 op=LOAD Jan 23 18:33:13.864000 audit[5119]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8d25b080 a2=94 a3=3 items=0 ppid=5103 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.864000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:33:13.864000 audit: BPF prog-id=226 op=UNLOAD Jan 23 18:33:13.864000 audit[5119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc8d25b080 a2=94 a3=3 items=0 ppid=5103 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.864000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:33:13.864000 audit: BPF prog-id=227 op=LOAD Jan 23 18:33:13.864000 audit[5119]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8d25b0c0 a2=94 a3=7ffc8d25b2a0 items=0 ppid=5103 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.864000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:33:13.864000 audit: BPF prog-id=227 op=UNLOAD Jan 23 18:33:13.864000 audit[5119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc8d25b0c0 a2=94 a3=7ffc8d25b2a0 items=0 ppid=5103 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.864000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:33:13.865000 audit: BPF prog-id=228 op=LOAD Jan 23 18:33:13.865000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffede89a930 a2=98 a3=3 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:13.865000 audit: BPF prog-id=228 op=UNLOAD Jan 23 18:33:13.865000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffede89a900 a3=0 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:13.865000 audit: BPF prog-id=229 op=LOAD Jan 23 18:33:13.865000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffede89a720 a2=94 a3=54428f items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:13.865000 audit: BPF prog-id=229 op=UNLOAD Jan 23 18:33:13.865000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffede89a720 a2=94 a3=54428f items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:13.865000 audit: BPF prog-id=230 op=LOAD Jan 23 18:33:13.865000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffede89a750 a2=94 a3=2 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:13.865000 audit: BPF prog-id=230 op=UNLOAD Jan 23 18:33:13.865000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffede89a750 a2=0 a3=2 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.038000 audit: BPF prog-id=231 op=LOAD Jan 23 18:33:14.038000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffede89a610 a2=94 a3=1 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:33:14.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.038000 audit: BPF prog-id=231 op=UNLOAD Jan 23 18:33:14.038000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffede89a610 a2=94 a3=1 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.048000 audit: BPF prog-id=232 op=LOAD Jan 23 18:33:14.048000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffede89a600 a2=94 a3=4 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.049000 audit: BPF prog-id=232 op=UNLOAD Jan 23 18:33:14.049000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffede89a600 a2=0 a3=4 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.049000 audit: BPF prog-id=233 op=LOAD Jan 23 18:33:14.049000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffede89a460 a2=94 a3=5 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.049000 audit: BPF prog-id=233 op=UNLOAD Jan 23 18:33:14.049000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffede89a460 a2=0 a3=5 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.049000 audit: BPF prog-id=234 op=LOAD Jan 23 18:33:14.049000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffede89a680 a2=94 a3=6 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.049000 audit: BPF prog-id=234 op=UNLOAD Jan 23 18:33:14.049000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffede89a680 a2=0 a3=6 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.049000 audit: BPF prog-id=235 op=LOAD Jan 23 18:33:14.049000 audit[5120]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffede899e30 a2=94 a3=88 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.050000 audit: BPF prog-id=236 op=LOAD Jan 23 18:33:14.050000 audit[5120]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffede899cb0 a2=94 a3=2 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.050000 audit: BPF prog-id=236 op=UNLOAD Jan 23 18:33:14.050000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffede899ce0 a2=0 a3=7ffede899de0 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.051000 audit: BPF prog-id=235 op=UNLOAD Jan 23 18:33:14.051000 audit[5120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3e244d10 a2=0 a3=b419fd28bc086306 items=0 ppid=5103 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.051000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:33:14.058000 audit: BPF prog-id=237 op=LOAD Jan 23 18:33:14.058000 audit[5123]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa8723810 a2=98 a3=1999999999999999 items=0 ppid=5103 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:33:14.058000 audit: BPF prog-id=237 op=UNLOAD Jan 23 18:33:14.058000 audit[5123]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffa87237e0 a3=0 items=0 ppid=5103 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:33:14.058000 audit: BPF prog-id=238 op=LOAD Jan 23 18:33:14.058000 audit[5123]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa87236f0 a2=94 a3=ffff items=0 ppid=5103 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:33:14.058000 audit: BPF prog-id=238 op=UNLOAD Jan 23 18:33:14.058000 audit[5123]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffa87236f0 a2=94 a3=ffff items=0 ppid=5103 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:33:14.058000 audit: BPF prog-id=239 op=LOAD Jan 23 18:33:14.058000 audit[5123]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa8723730 a2=94 a3=7fffa8723910 items=0 ppid=5103 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:33:14.058000 audit: BPF prog-id=239 op=UNLOAD Jan 23 18:33:14.058000 audit[5123]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffa8723730 a2=94 a3=7fffa8723910 items=0 ppid=5103 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:33:14.120650 systemd-networkd[1576]: vxlan.calico: Link UP Jan 23 18:33:14.120657 systemd-networkd[1576]: vxlan.calico: Gained carrier Jan 23 18:33:14.147000 audit: BPF prog-id=240 op=LOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9eaeade0 a2=98 a3=0 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=240 op=UNLOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9eaeadb0 a3=0 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=241 op=LOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9eaeabf0 a2=94 a3=54428f items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=241 op=UNLOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9eaeabf0 a2=94 a3=54428f items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=242 op=LOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9eaeac20 a2=94 a3=2 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=242 op=UNLOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9eaeac20 a2=0 a3=2 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=243 op=LOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9eaea9d0 a2=94 a3=4 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=243 op=UNLOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9eaea9d0 a2=94 a3=4 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=244 op=LOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9eaeaad0 a2=94 a3=7fff9eaeac50 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.147000 audit: BPF prog-id=244 op=UNLOAD Jan 23 18:33:14.147000 audit[5147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9eaeaad0 a2=0 a3=7fff9eaeac50 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.148000 audit: BPF prog-id=245 op=LOAD Jan 23 18:33:14.148000 audit[5147]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9eaea200 a2=94 a3=2 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.148000 audit: BPF prog-id=245 op=UNLOAD Jan 23 18:33:14.148000 audit[5147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9eaea200 a2=0 a3=2 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.148000 audit: BPF prog-id=246 op=LOAD Jan 23 18:33:14.148000 audit[5147]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9eaea300 a2=94 a3=30 items=0 ppid=5103 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:33:14.156000 audit: BPF prog-id=247 op=LOAD Jan 23 18:33:14.156000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd79129080 a2=98 a3=0 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.156000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.156000 audit: BPF prog-id=247 op=UNLOAD Jan 23 18:33:14.156000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd79129050 a3=0 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.156000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.156000 audit: BPF prog-id=248 op=LOAD Jan 23 18:33:14.156000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd79128e70 a2=94 a3=54428f items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.156000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.156000 audit: BPF prog-id=248 op=UNLOAD Jan 23 18:33:14.156000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd79128e70 a2=94 a3=54428f items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.156000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.156000 audit: BPF prog-id=249 op=LOAD Jan 23 18:33:14.156000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd79128ea0 a2=94 a3=2 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.156000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.156000 audit: BPF prog-id=249 op=UNLOAD Jan 23 18:33:14.156000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd79128ea0 a2=0 a3=2 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.156000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.318000 audit: BPF prog-id=250 op=LOAD Jan 23 18:33:14.318000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd79128d60 a2=94 a3=1 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.318000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.318000 audit: BPF prog-id=250 op=UNLOAD Jan 23 18:33:14.318000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd79128d60 a2=94 a3=1 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.318000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.329000 audit: BPF prog-id=251 op=LOAD Jan 23 18:33:14.329000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd79128d50 a2=94 a3=4 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.329000 audit: BPF prog-id=251 op=UNLOAD Jan 23 18:33:14.329000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd79128d50 a2=0 a3=4 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.330000 audit: BPF prog-id=252 op=LOAD Jan 23 18:33:14.330000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd79128bb0 a2=94 a3=5 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.330000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.330000 audit: BPF prog-id=252 op=UNLOAD Jan 23 18:33:14.330000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd79128bb0 a2=0 a3=5 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.330000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.330000 audit: BPF prog-id=253 op=LOAD Jan 23 18:33:14.330000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd79128dd0 a2=94 a3=6 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.330000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.330000 audit: BPF prog-id=253 op=UNLOAD Jan 23 18:33:14.330000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd79128dd0 a2=0 a3=6 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.330000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.330000 audit: BPF prog-id=254 op=LOAD Jan 23 18:33:14.330000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd79128580 a2=94 a3=88 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.330000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.331000 audit: BPF prog-id=255 op=LOAD Jan 23 18:33:14.331000 audit[5152]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd79128400 a2=94 a3=2 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.331000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.331000 audit: BPF prog-id=255 op=UNLOAD Jan 23 18:33:14.331000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd79128430 a2=0 
a3=7ffd79128530 items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.331000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.331000 audit: BPF prog-id=254 op=UNLOAD Jan 23 18:33:14.331000 audit[5152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=38911d10 a2=0 a3=a765903e88a16a3e items=0 ppid=5103 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.331000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:33:14.349000 audit: BPF prog-id=246 op=UNLOAD Jan 23 18:33:14.349000 audit[5103]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000f922c0 a2=0 a3=0 items=0 ppid=4060 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.349000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 18:33:14.414000 audit[5183]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:33:14.414000 audit[5183]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd5e10ff00 a2=0 a3=7ffd5e10feec items=0 ppid=5103 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.414000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:33:14.428000 audit[5184]: NETFILTER_CFG table=raw:128 family=2 entries=21 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:33:14.428000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd52eb0ff0 a2=0 a3=7ffd52eb0fdc items=0 ppid=5103 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.428000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:33:14.431000 audit[5189]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=5189 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:33:14.431000 audit[5189]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe003fc870 a2=0 a3=7ffe003fc85c items=0 ppid=5103 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:33:14.431000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:33:14.440000 audit[5186]: NETFILTER_CFG table=filter:130 family=2 entries=321 op=nft_register_chain pid=5186 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:33:14.440000 audit[5186]: SYSCALL arch=c000003e syscall=46 success=yes exit=190616 a0=3 a1=7fff7c3fee20 a2=0 a3=7fff7c3fee0c items=0 ppid=5103 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:33:14.440000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:33:15.170163 systemd-networkd[1576]: vxlan.calico: Gained IPv6LL Jan 23 18:33:16.728915 containerd[1695]: time="2026-01-23T18:33:16.728569930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:33:17.071068 containerd[1695]: time="2026-01-23T18:33:17.070587753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:17.073258 containerd[1695]: time="2026-01-23T18:33:17.073083462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:33:17.073258 containerd[1695]: time="2026-01-23T18:33:17.073181940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:17.073902 kubelet[2940]: E0123 18:33:17.073788 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:33:17.075191 kubelet[2940]: E0123 18:33:17.073911 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:33:17.075191 kubelet[2940]: E0123 18:33:17.074087 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:17.076598 containerd[1695]: time="2026-01-23T18:33:17.076288942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:33:17.440206 containerd[1695]: time="2026-01-23T18:33:17.440061553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:17.441738 containerd[1695]: time="2026-01-23T18:33:17.441643509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:33:17.441738 containerd[1695]: time="2026-01-23T18:33:17.441670936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:17.441924 kubelet[2940]: E0123 18:33:17.441873 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:33:17.441971 kubelet[2940]: E0123 18:33:17.441927 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:33:17.442036 kubelet[2940]: E0123 18:33:17.442019 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:17.442943 kubelet[2940]: E0123 18:33:17.442894 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:33:18.711449 containerd[1695]: time="2026-01-23T18:33:18.711370088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:33:19.059248 containerd[1695]: time="2026-01-23T18:33:19.059074254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:19.060942 containerd[1695]: time="2026-01-23T18:33:19.060895635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:33:19.061203 containerd[1695]: time="2026-01-23T18:33:19.060995914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:19.061274 kubelet[2940]: E0123 18:33:19.061225 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:33:19.061579 
kubelet[2940]: E0123 18:33:19.061291 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:33:19.061579 kubelet[2940]: E0123 18:33:19.061399 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xmb4s_calico-system(d1b9504d-be7a-4b41-b198-d33537aa128d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:19.061579 kubelet[2940]: E0123 18:33:19.061445 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:33:19.721858 containerd[1695]: time="2026-01-23T18:33:19.721450813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:33:20.293504 containerd[1695]: time="2026-01-23T18:33:20.293420445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:20.295528 containerd[1695]: time="2026-01-23T18:33:20.295414844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:33:20.295528 containerd[1695]: time="2026-01-23T18:33:20.295474816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:20.296356 kubelet[2940]: E0123 18:33:20.296004 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:33:20.296356 kubelet[2940]: E0123 18:33:20.296084 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:33:20.296983 kubelet[2940]: E0123 18:33:20.296295 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65674f688d-kxr2f_calico-system(fbf77e94-7d63-4cf9-9744-b692622d727e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:20.296983 kubelet[2940]: E0123 18:33:20.296458 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:33:20.297094 containerd[1695]: time="2026-01-23T18:33:20.296598084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:33:20.638204 containerd[1695]: time="2026-01-23T18:33:20.638106358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:20.640627 containerd[1695]: time="2026-01-23T18:33:20.640550635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:33:20.640770 containerd[1695]: time="2026-01-23T18:33:20.640732959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:20.641171 kubelet[2940]: E0123 18:33:20.641100 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:33:20.641295 kubelet[2940]: E0123 18:33:20.641196 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:33:20.641451 kubelet[2940]: E0123 18:33:20.641347 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:20.644477 containerd[1695]: time="2026-01-23T18:33:20.644411813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:33:21.000465 containerd[1695]: time="2026-01-23T18:33:20.999789454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:21.005715 containerd[1695]: time="2026-01-23T18:33:21.005631569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:33:21.005884 containerd[1695]: time="2026-01-23T18:33:21.005851733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:21.006414 kubelet[2940]: E0123 18:33:21.006132 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:33:21.006414 kubelet[2940]: E0123 18:33:21.006204 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:33:21.006886 kubelet[2940]: E0123 18:33:21.006856 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:21.007793 containerd[1695]: time="2026-01-23T18:33:21.007325177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:33:21.007923 kubelet[2940]: E0123 18:33:21.007710 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:33:21.340695 containerd[1695]: time="2026-01-23T18:33:21.340530558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:21.343996 containerd[1695]: time="2026-01-23T18:33:21.343916911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:33:21.344148 containerd[1695]: time="2026-01-23T18:33:21.344048307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:21.344442 kubelet[2940]: E0123 18:33:21.344393 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:21.345281 kubelet[2940]: E0123 18:33:21.344992 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:21.345281 kubelet[2940]: E0123 18:33:21.345153 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-67759dc977-nhm59_calico-apiserver(d7423672-957c-488d-baee-8a9e9c290e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:21.345281 kubelet[2940]: E0123 18:33:21.345213 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:33:25.713999 containerd[1695]: time="2026-01-23T18:33:25.713424302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:33:26.069188 containerd[1695]: time="2026-01-23T18:33:26.068882074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:26.071265 containerd[1695]: time="2026-01-23T18:33:26.071125829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:33:26.071265 containerd[1695]: time="2026-01-23T18:33:26.071230697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:26.071852 kubelet[2940]: E0123 18:33:26.071665 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:26.071852 kubelet[2940]: E0123 18:33:26.071719 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:26.072498 kubelet[2940]: E0123 18:33:26.072391 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-rb8xc_calico-apiserver(2ae423e7-492d-4f77-ad52-275afa909708): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:26.072498 kubelet[2940]: E0123 18:33:26.072448 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:33:28.713250 kubelet[2940]: E0123 18:33:28.713128 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:33:29.711759 kubelet[2940]: E0123 18:33:29.711637 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:33:34.711374 kubelet[2940]: E0123 18:33:34.711302 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:33:34.713295 kubelet[2940]: E0123 18:33:34.711630 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:33:34.714027 kubelet[2940]: E0123 18:33:34.713916 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:33:38.710318 kubelet[2940]: E0123 18:33:38.710228 2940 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:33:39.719109 containerd[1695]: time="2026-01-23T18:33:39.718759768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:33:40.058848 containerd[1695]: time="2026-01-23T18:33:40.057687000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:40.061989 containerd[1695]: time="2026-01-23T18:33:40.061903666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:33:40.061989 containerd[1695]: time="2026-01-23T18:33:40.061950768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:40.062284 kubelet[2940]: E0123 18:33:40.062182 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:33:40.062284 kubelet[2940]: E0123 18:33:40.062235 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:33:40.062923 kubelet[2940]: E0123 18:33:40.062348 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:40.063517 containerd[1695]: time="2026-01-23T18:33:40.063305269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:33:40.392848 containerd[1695]: time="2026-01-23T18:33:40.392573080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:40.394941 containerd[1695]: time="2026-01-23T18:33:40.394901809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:33:40.395772 containerd[1695]: time="2026-01-23T18:33:40.395005913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:40.395840 kubelet[2940]: E0123 18:33:40.395541 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:33:40.395840 kubelet[2940]: E0123 18:33:40.395597 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:33:40.395840 kubelet[2940]: E0123 18:33:40.395695 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:40.395840 kubelet[2940]: E0123 18:33:40.395734 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:33:42.711905 containerd[1695]: time="2026-01-23T18:33:42.710800151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:33:43.070948 containerd[1695]: time="2026-01-23T18:33:43.070781550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:43.074022 containerd[1695]: time="2026-01-23T18:33:43.073976147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:33:43.074144 containerd[1695]: time="2026-01-23T18:33:43.074077652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:43.074800 kubelet[2940]: E0123 18:33:43.074296 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:33:43.074800 kubelet[2940]: E0123 18:33:43.074353 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:33:43.074800 kubelet[2940]: E0123 18:33:43.074440 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-xmb4s_calico-system(d1b9504d-be7a-4b41-b198-d33537aa128d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:43.074800 kubelet[2940]: E0123 18:33:43.074479 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:33:46.709875 containerd[1695]: time="2026-01-23T18:33:46.709748660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:33:47.045490 containerd[1695]: time="2026-01-23T18:33:47.045293920Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:47.047932 containerd[1695]: time="2026-01-23T18:33:47.047792713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:33:47.047932 containerd[1695]: time="2026-01-23T18:33:47.047888662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:47.048199 kubelet[2940]: E0123 18:33:47.048150 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:33:47.048521 kubelet[2940]: E0123 18:33:47.048221 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:33:47.048521 kubelet[2940]: E0123 18:33:47.048501 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:47.050491 containerd[1695]: time="2026-01-23T18:33:47.050280422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:33:47.397964 containerd[1695]: time="2026-01-23T18:33:47.397781510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:47.399918 containerd[1695]: time="2026-01-23T18:33:47.399883479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:33:47.399987 containerd[1695]: time="2026-01-23T18:33:47.399960775Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:47.400159 kubelet[2940]: E0123 18:33:47.400128 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:33:47.400207 kubelet[2940]: E0123 18:33:47.400169 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:33:47.400355 kubelet[2940]: E0123 18:33:47.400340 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65674f688d-kxr2f_calico-system(fbf77e94-7d63-4cf9-9744-b692622d727e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:47.400386 kubelet[2940]: E0123 18:33:47.400370 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:33:47.400591 containerd[1695]: time="2026-01-23T18:33:47.400573755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:33:47.748057 containerd[1695]: time="2026-01-23T18:33:47.747864327Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:47.749599 containerd[1695]: time="2026-01-23T18:33:47.749567560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:33:47.749685 containerd[1695]: time="2026-01-23T18:33:47.749637759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:47.750183 kubelet[2940]: E0123 18:33:47.749876 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:33:47.750183 kubelet[2940]: E0123 18:33:47.749929 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:33:47.750429 kubelet[2940]: E0123 18:33:47.750349 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:47.750831 kubelet[2940]: E0123 18:33:47.750503 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:33:49.712085 containerd[1695]: time="2026-01-23T18:33:49.711883971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:33:50.049090 containerd[1695]: time="2026-01-23T18:33:50.048981303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:50.050893 containerd[1695]: time="2026-01-23T18:33:50.050857266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:33:50.050983 containerd[1695]: time="2026-01-23T18:33:50.050925077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:50.052233 kubelet[2940]: E0123 18:33:50.051125 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:50.052670 kubelet[2940]: E0123 18:33:50.052524 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:50.052670 kubelet[2940]: E0123 18:33:50.052609 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-nhm59_calico-apiserver(d7423672-957c-488d-baee-8a9e9c290e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:50.052670 kubelet[2940]: E0123 18:33:50.052637 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:33:51.710626 containerd[1695]: time="2026-01-23T18:33:51.710428533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:33:52.050067 containerd[1695]: time="2026-01-23T18:33:52.049658244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:33:52.052467 containerd[1695]: time="2026-01-23T18:33:52.052309560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:33:52.052467 containerd[1695]: time="2026-01-23T18:33:52.052346077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:33:52.052690 kubelet[2940]: E0123 18:33:52.052581 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:52.052690 kubelet[2940]: E0123 18:33:52.052620 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:33:52.052690 kubelet[2940]: E0123 18:33:52.052680 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-rb8xc_calico-apiserver(2ae423e7-492d-4f77-ad52-275afa909708): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:33:52.053368 kubelet[2940]: E0123 18:33:52.052707 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:33:53.716592 kubelet[2940]: E0123 18:33:53.716513 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:33:54.710244 kubelet[2940]: E0123 18:33:54.709996 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:33:58.711568 kubelet[2940]: E0123 18:33:58.711301 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:34:00.711892 kubelet[2940]: E0123 18:34:00.711574 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:34:01.712811 kubelet[2940]: E0123 18:34:01.712679 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:34:05.712286 kubelet[2940]: E0123 18:34:05.712214 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:34:07.712385 kubelet[2940]: E0123 18:34:07.712329 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:34:07.712924 kubelet[2940]: E0123 18:34:07.712797 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:34:11.711693 kubelet[2940]: E0123 18:34:11.711653 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:34:12.710476 kubelet[2940]: E0123 18:34:12.710386 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:34:13.721418 kubelet[2940]: E0123 18:34:13.721038 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:34:17.712659 kubelet[2940]: E0123 18:34:17.712379 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:34:18.710028 kubelet[2940]: E0123 18:34:18.709737 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:34:20.710863 kubelet[2940]: E0123 18:34:20.710292 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:34:24.711492 kubelet[2940]: E0123 18:34:24.711436 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:34:25.712620 kubelet[2940]: E0123 18:34:25.711879 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:34:25.712620 kubelet[2940]: E0123 18:34:25.712322 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:34:29.713073 containerd[1695]: time="2026-01-23T18:34:29.712886217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:34:30.062387 containerd[1695]: time="2026-01-23T18:34:30.062054590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:30.064176 containerd[1695]: time="2026-01-23T18:34:30.064005822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:34:30.064493 containerd[1695]: time="2026-01-23T18:34:30.064359541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:30.064713 kubelet[2940]: E0123 18:34:30.064680 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:34:30.065037 kubelet[2940]: E0123 18:34:30.064721 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:34:30.065037 kubelet[2940]: E0123 18:34:30.064790 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:30.067160 containerd[1695]: time="2026-01-23T18:34:30.066245854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:34:30.399274 containerd[1695]: time="2026-01-23T18:34:30.399230387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:30.401407 containerd[1695]: time="2026-01-23T18:34:30.401348425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 
18:34:30.401407 containerd[1695]: time="2026-01-23T18:34:30.401377411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:30.401848 kubelet[2940]: E0123 18:34:30.401640 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:34:30.401848 kubelet[2940]: E0123 18:34:30.401684 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:34:30.401848 kubelet[2940]: E0123 18:34:30.401757 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:30.401848 kubelet[2940]: E0123 18:34:30.401788 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:34:30.710831 kubelet[2940]: E0123 18:34:30.710410 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:34:34.711164 containerd[1695]: time="2026-01-23T18:34:34.711029862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:34:35.035212 containerd[1695]: time="2026-01-23T18:34:35.034846448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:35.036469 containerd[1695]: time="2026-01-23T18:34:35.036394554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:34:35.036469 containerd[1695]: time="2026-01-23T18:34:35.036436418Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:35.036696 kubelet[2940]: E0123 18:34:35.036660 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:34:35.036696 kubelet[2940]: E0123 18:34:35.036700 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:34:35.036971 kubelet[2940]: E0123 18:34:35.036767 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xmb4s_calico-system(d1b9504d-be7a-4b41-b198-d33537aa128d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:35.036971 kubelet[2940]: E0123 18:34:35.036798 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:34:36.710402 containerd[1695]: time="2026-01-23T18:34:36.710231770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:34:37.040909 containerd[1695]: time="2026-01-23T18:34:37.040501288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:37.042843 containerd[1695]: time="2026-01-23T18:34:37.042690329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:34:37.043124 containerd[1695]: time="2026-01-23T18:34:37.042805715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:37.043451 kubelet[2940]: E0123 18:34:37.043349 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:34:37.043451 kubelet[2940]: E0123 18:34:37.043421 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:34:37.045860 kubelet[2940]: E0123 18:34:37.044794 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-65674f688d-kxr2f_calico-system(fbf77e94-7d63-4cf9-9744-b692622d727e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:37.045860 kubelet[2940]: E0123 18:34:37.044907 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:34:39.712330 containerd[1695]: time="2026-01-23T18:34:39.712188980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:34:40.032835 containerd[1695]: time="2026-01-23T18:34:40.032616575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:40.036574 containerd[1695]: time="2026-01-23T18:34:40.036490936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:34:40.036732 containerd[1695]: time="2026-01-23T18:34:40.036703171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:40.037049 kubelet[2940]: E0123 18:34:40.037004 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:34:40.037710 kubelet[2940]: E0123 18:34:40.037499 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:34:40.038868 kubelet[2940]: E0123 18:34:40.037897 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-nhm59_calico-apiserver(d7423672-957c-488d-baee-8a9e9c290e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:40.038868 kubelet[2940]: E0123 18:34:40.038000 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:34:40.039005 containerd[1695]: time="2026-01-23T18:34:40.038399613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 
18:34:40.380071 containerd[1695]: time="2026-01-23T18:34:40.379877702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:40.383254 containerd[1695]: time="2026-01-23T18:34:40.382979811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:34:40.383254 containerd[1695]: time="2026-01-23T18:34:40.383181786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:40.384345 kubelet[2940]: E0123 18:34:40.383371 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:34:40.384345 kubelet[2940]: E0123 18:34:40.383415 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:34:40.384345 kubelet[2940]: E0123 18:34:40.383505 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:40.385376 containerd[1695]: time="2026-01-23T18:34:40.385002496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:34:40.739997 containerd[1695]: time="2026-01-23T18:34:40.739958318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:40.742104 containerd[1695]: time="2026-01-23T18:34:40.742072693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:34:40.742163 containerd[1695]: time="2026-01-23T18:34:40.742146018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:40.742302 kubelet[2940]: E0123 18:34:40.742274 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:34:40.742337 kubelet[2940]: E0123 18:34:40.742314 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:34:40.742390 kubelet[2940]: E0123 18:34:40.742376 2940 
kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:40.742443 kubelet[2940]: E0123 18:34:40.742409 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:34:45.712523 containerd[1695]: time="2026-01-23T18:34:45.712485323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:34:45.713103 kubelet[2940]: E0123 18:34:45.713074 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:34:46.035550 containerd[1695]: time="2026-01-23T18:34:46.035353174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:34:46.037267 containerd[1695]: time="2026-01-23T18:34:46.037180486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:34:46.037267 containerd[1695]: time="2026-01-23T18:34:46.037231021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:34:46.037531 kubelet[2940]: E0123 18:34:46.037494 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:34:46.037574 kubelet[2940]: E0123 18:34:46.037541 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:34:46.037637 kubelet[2940]: E0123 18:34:46.037621 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-rb8xc_calico-apiserver(2ae423e7-492d-4f77-ad52-275afa909708): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:34:46.037683 kubelet[2940]: E0123 18:34:46.037655 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:34:47.709746 kubelet[2940]: E0123 18:34:47.709694 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:34:50.709751 kubelet[2940]: E0123 18:34:50.709568 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:34:52.709920 kubelet[2940]: E0123 18:34:52.709802 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:34:55.710934 kubelet[2940]: E0123 18:34:55.710895 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:34:56.463121 systemd[1]: Started sshd@9-10.0.6.238:22-68.220.241.50:52946.service - OpenSSH per-connection server daemon (68.220.241.50:52946). Jan 23 18:34:56.464339 kernel: kauditd_printk_skb: 206 callbacks suppressed Jan 23 18:34:56.464461 kernel: audit: type=1130 audit(1769193296.462:720): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.6.238:22-68.220.241.50:52946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:34:56.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.6.238:22-68.220.241.50:52946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:34:57.083000 audit[5352]: USER_ACCT pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.090595 sshd[5352]: Accepted publickey for core from 68.220.241.50 port 52946 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:34:57.095694 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:34:57.096862 kernel: audit: type=1101 audit(1769193297.083:721): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.088000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.108081 kernel: audit: type=1103 audit(1769193297.088:722): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.108224 kernel: audit: type=1006 audit(1769193297.088:723): pid=5352 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 18:34:57.088000 audit[5352]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc00fa9060 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:34:57.117526 kernel: audit: type=1300 audit(1769193297.088:723): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc00fa9060 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:34:57.088000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:34:57.124702 kernel: audit: type=1327 audit(1769193297.088:723): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:34:57.135018 systemd-logind[1655]: New session 11 of user core. Jan 23 18:34:57.143161 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:34:57.150000 audit[5352]: USER_START pid=5352 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.161901 kernel: audit: type=1105 audit(1769193297.150:724): pid=5352 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.162058 kernel: audit: type=1103 audit(1769193297.155:725): pid=5360 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.155000 audit[5360]: CRED_ACQ pid=5360 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.629423 sshd[5360]: Connection closed by 68.220.241.50 port 52946 Jan 23 18:34:57.631342 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Jan 23 18:34:57.633000 audit[5352]: USER_END pid=5352 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.637000 audit[5352]: CRED_DISP pid=5352 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.649682 kernel: audit: type=1106 audit(1769193297.633:726): pid=5352 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.649804 kernel: audit: type=1104 audit(1769193297.637:727): pid=5352 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:34:57.653337 systemd[1]: sshd@9-10.0.6.238:22-68.220.241.50:52946.service: Deactivated successfully. 
Jan 23 18:34:57.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.6.238:22-68.220.241.50:52946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:34:57.660635 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:34:57.664420 systemd-logind[1655]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:34:57.667023 systemd-logind[1655]: Removed session 11. Jan 23 18:34:58.710775 kubelet[2940]: E0123 18:34:58.710265 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:34:58.710775 kubelet[2940]: E0123 18:34:58.710304 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:34:59.714773 kubelet[2940]: E0123 18:34:59.714735 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:35:01.710318 kubelet[2940]: E0123 18:35:01.710256 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:35:02.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.238:22-68.220.241.50:47926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:02.735086 systemd[1]: Started sshd@10-10.0.6.238:22-68.220.241.50:47926.service - OpenSSH per-connection server daemon (68.220.241.50:47926). Jan 23 18:35:02.736423 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:35:02.736467 kernel: audit: type=1130 audit(1769193302.733:729): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.238:22-68.220.241.50:47926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:03.263000 audit[5400]: USER_ACCT pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.266403 sshd[5400]: Accepted publickey for core from 68.220.241.50 port 47926 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:03.276204 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:03.278956 kernel: audit: type=1101 audit(1769193303.263:730): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.289939 kernel: audit: type=1103 audit(1769193303.272:731): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.272000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.297885 kernel: audit: type=1006 audit(1769193303.272:732): pid=5400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 18:35:03.272000 audit[5400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe7b86e30 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:03.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:03.308848 kernel: audit: type=1300 audit(1769193303.272:732): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe7b86e30 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:03.308963 kernel: audit: type=1327 audit(1769193303.272:732): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:03.310623 systemd-logind[1655]: New session 12 of user core. Jan 23 18:35:03.315096 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 18:35:03.316000 audit[5400]: USER_START pid=5400 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.320000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.328737 kernel: audit: type=1105 audit(1769193303.316:733): pid=5400 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.328988 kernel: audit: type=1103 audit(1769193303.320:734): pid=5404 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.704918 sshd[5404]: Connection closed by 68.220.241.50 port 47926 Jan 23 18:35:03.706616 sshd-session[5400]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:03.713532 kernel: audit: type=1106 audit(1769193303.705:735): pid=5400 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.705000 audit[5400]: USER_END pid=5400 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.710894 systemd[1]: sshd@10-10.0.6.238:22-68.220.241.50:47926.service: Deactivated successfully. Jan 23 18:35:03.714545 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:35:03.705000 audit[5400]: CRED_DISP pid=5400 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.718734 systemd-logind[1655]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:35:03.721832 kernel: audit: type=1104 audit(1769193303.705:736): pid=5400 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:03.722619 systemd-logind[1655]: Removed session 12. Jan 23 18:35:03.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.6.238:22-68.220.241.50:47926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:06.710376 kubelet[2940]: E0123 18:35:06.710288 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:35:08.813951 systemd[1]: Started sshd@11-10.0.6.238:22-68.220.241.50:47942.service - OpenSSH per-connection server daemon (68.220.241.50:47942). Jan 23 18:35:08.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.238:22-68.220.241.50:47942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:08.815825 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:35:08.815873 kernel: audit: type=1130 audit(1769193308.812:738): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.238:22-68.220.241.50:47942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:09.354000 audit[5417]: USER_ACCT pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.356950 sshd[5417]: Accepted publickey for core from 68.220.241.50 port 47942 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:09.360870 kernel: audit: type=1101 audit(1769193309.354:739): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.360000 audit[5417]: CRED_ACQ pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.362209 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:09.364854 kernel: audit: type=1103 audit(1769193309.360:740): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.360000 audit[5417]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff31a899d0 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:09.370644 kernel: audit: type=1006 audit(1769193309.360:741): pid=5417 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 18:35:09.370717 kernel: audit: type=1300 audit(1769193309.360:741): arch=c000003e syscall=1 
success=yes exit=3 a0=8 a1=7fff31a899d0 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:09.360000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:09.374078 kernel: audit: type=1327 audit(1769193309.360:741): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:09.377276 systemd-logind[1655]: New session 13 of user core. Jan 23 18:35:09.381003 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:35:09.384000 audit[5417]: USER_START pid=5417 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.390841 kernel: audit: type=1105 audit(1769193309.384:742): pid=5417 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.390000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.394841 kernel: audit: type=1103 audit(1769193309.390:743): pid=5421 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.709852 sshd[5421]: Connection closed by 68.220.241.50 port 47942 Jan 23 18:35:09.711769 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:09.715000 audit[5417]: USER_END pid=5417 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.727914 kernel: audit: type=1106 audit(1769193309.715:744): pid=5417 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.715000 audit[5417]: CRED_DISP pid=5417 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.730537 systemd[1]: sshd@11-10.0.6.238:22-68.220.241.50:47942.service: Deactivated successfully. 
Jan 23 18:35:09.734239 kubelet[2940]: E0123 18:35:09.733440 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:35:09.735041 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:35:09.736846 kernel: audit: type=1104 audit(1769193309.715:745): pid=5417 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:09.741010 systemd-logind[1655]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:35:09.743962 systemd-logind[1655]: Removed session 13. Jan 23 18:35:09.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.6.238:22-68.220.241.50:47942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:11.710531 kubelet[2940]: E0123 18:35:11.710310 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:35:11.710531 kubelet[2940]: E0123 18:35:11.710468 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:35:13.712866 kubelet[2940]: E0123 18:35:13.712824 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:35:14.821668 systemd[1]: Started sshd@12-10.0.6.238:22-68.220.241.50:44522.service - OpenSSH per-connection server daemon (68.220.241.50:44522). Jan 23 18:35:14.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.238:22-68.220.241.50:44522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:14.824882 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:35:14.824963 kernel: audit: type=1130 audit(1769193314.821:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.238:22-68.220.241.50:44522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:15.359000 audit[5434]: USER_ACCT pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.365272 sshd[5434]: Accepted publickey for core from 68.220.241.50 port 44522 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:15.366194 kernel: audit: type=1101 audit(1769193315.359:748): pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.369912 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:15.366000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.373827 kernel: audit: type=1103 audit(1769193315.366:749): pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.380177 systemd-logind[1655]: New session 14 of user core. 
Jan 23 18:35:15.366000 audit[5434]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4ea3c970 a2=3 a3=0 items=0 ppid=1 pid=5434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:15.382731 kernel: audit: type=1006 audit(1769193315.366:750): pid=5434 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 23 18:35:15.382774 kernel: audit: type=1300 audit(1769193315.366:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4ea3c970 a2=3 a3=0 items=0 ppid=1 pid=5434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:15.388018 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:35:15.366000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:15.390929 kernel: audit: type=1327 audit(1769193315.366:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:15.391000 audit[5434]: USER_START pid=5434 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.396845 kernel: audit: type=1105 audit(1769193315.391:751): pid=5434 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.396000 audit[5439]: CRED_ACQ pid=5439 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.400829 kernel: audit: type=1103 audit(1769193315.396:752): pid=5439 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.710413 kubelet[2940]: E0123 18:35:15.710374 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:35:15.714534 sshd[5439]: Connection closed by 68.220.241.50 port 44522 Jan 23 18:35:15.715308 sshd-session[5434]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:15.716000 audit[5434]: USER_END pid=5434 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.719710 systemd-logind[1655]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:35:15.721493 systemd[1]: sshd@12-10.0.6.238:22-68.220.241.50:44522.service: Deactivated successfully. Jan 23 18:35:15.723120 kernel: audit: type=1106 audit(1769193315.716:753): pid=5434 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.716000 audit[5434]: CRED_DISP pid=5434 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.724973 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:35:15.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.6.238:22-68.220.241.50:44522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:15.728590 kernel: audit: type=1104 audit(1769193315.716:754): pid=5434 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:15.729580 systemd-logind[1655]: Removed session 14. Jan 23 18:35:15.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.6.238:22-68.220.241.50:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:15.821063 systemd[1]: Started sshd@13-10.0.6.238:22-68.220.241.50:44536.service - OpenSSH per-connection server daemon (68.220.241.50:44536). 
Jan 23 18:35:16.342831 sshd[5452]: Accepted publickey for core from 68.220.241.50 port 44536 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:16.341000 audit[5452]: USER_ACCT pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:16.343000 audit[5452]: CRED_ACQ pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:16.343000 audit[5452]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7c000fa0 a2=3 a3=0 items=0 ppid=1 pid=5452 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:16.343000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:16.344920 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:16.355207 systemd-logind[1655]: New session 15 of user core. Jan 23 18:35:16.361148 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:35:16.365000 audit[5452]: USER_START pid=5452 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:16.370000 audit[5456]: CRED_ACQ pid=5456 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:16.787689 sshd[5456]: Connection closed by 68.220.241.50 port 44536 Jan 23 18:35:16.788182 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:16.788000 audit[5452]: USER_END pid=5452 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:16.788000 audit[5452]: CRED_DISP pid=5452 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:16.791400 systemd-logind[1655]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:35:16.793180 systemd[1]: sshd@13-10.0.6.238:22-68.220.241.50:44536.service: Deactivated successfully. Jan 23 18:35:16.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.6.238:22-68.220.241.50:44536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:16.796030 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 23 18:35:16.798552 systemd-logind[1655]: Removed session 15. Jan 23 18:35:16.895034 systemd[1]: Started sshd@14-10.0.6.238:22-68.220.241.50:44538.service - OpenSSH per-connection server daemon (68.220.241.50:44538). Jan 23 18:35:16.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.6.238:22-68.220.241.50:44538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:17.431000 audit[5466]: USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:17.432696 sshd[5466]: Accepted publickey for core from 68.220.241.50 port 44538 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:17.433000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:17.433000 audit[5466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd60e19920 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:17.433000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:17.435203 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:17.442536 systemd-logind[1655]: New session 16 of user core. Jan 23 18:35:17.450106 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 18:35:17.455000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:17.457000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:17.710126 kubelet[2940]: E0123 18:35:17.709551 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:35:17.792026 sshd[5470]: Connection closed by 68.220.241.50 port 44538 Jan 23 18:35:17.793252 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:17.793000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:17.793000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:17.797381 systemd[1]: sshd@14-10.0.6.238:22-68.220.241.50:44538.service: Deactivated successfully. Jan 23 18:35:17.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.6.238:22-68.220.241.50:44538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:17.799119 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:35:17.800976 systemd-logind[1655]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:35:17.801930 systemd-logind[1655]: Removed session 16. Jan 23 18:35:21.652164 update_engine[1660]: I20260123 18:35:21.651944 1660 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 18:35:21.652164 update_engine[1660]: I20260123 18:35:21.651993 1660 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 18:35:21.654354 update_engine[1660]: I20260123 18:35:21.653937 1660 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 18:35:21.654354 update_engine[1660]: I20260123 18:35:21.654299 1660 omaha_request_params.cc:62] Current group set to beta Jan 23 18:35:21.655872 update_engine[1660]: I20260123 18:35:21.655831 1660 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jan 23 18:35:21.655931 update_engine[1660]: I20260123 18:35:21.655923 1660 update_attempter.cc:643] Scheduling an action processor start. Jan 23 18:35:21.655972 update_engine[1660]: I20260123 18:35:21.655963 1660 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 18:35:21.656580 locksmithd[1699]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 18:35:21.664666 update_engine[1660]: I20260123 18:35:21.664642 1660 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 18:35:21.664785 update_engine[1660]: I20260123 18:35:21.664773 1660 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 18:35:21.664833 update_engine[1660]: I20260123 18:35:21.664824 1660 omaha_request_action.cc:272] Request: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.664833 update_engine[1660]: Jan 23 18:35:21.665004 update_engine[1660]: I20260123 18:35:21.664994 1660 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:35:21.670417 update_engine[1660]: I20260123 18:35:21.670394 1660 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:35:21.671832 update_engine[1660]: I20260123 18:35:21.671406 1660 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 18:35:21.677566 update_engine[1660]: E20260123 18:35:21.677481 1660 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:35:21.677566 update_engine[1660]: I20260123 18:35:21.677543 1660 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 18:35:22.711128 kubelet[2940]: E0123 18:35:22.711062 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:35:22.907367 systemd[1]: Started sshd@15-10.0.6.238:22-68.220.241.50:45986.service - OpenSSH per-connection server daemon (68.220.241.50:45986). Jan 23 18:35:22.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.238:22-68.220.241.50:45986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:22.909568 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 18:35:22.909682 kernel: audit: type=1130 audit(1769193322.907:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.238:22-68.220.241.50:45986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:23.453000 audit[5486]: USER_ACCT pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.454734 sshd[5486]: Accepted publickey for core from 68.220.241.50 port 45986 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:23.457498 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:23.453000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.463678 kernel: audit: type=1101 audit(1769193323.453:775): pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.463731 kernel: audit: type=1103 audit(1769193323.453:776): pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.453000 audit[5486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd8de4110 a2=3 a3=0 items=0 ppid=1 pid=5486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:23.480422 kernel: audit: type=1006 audit(1769193323.453:777): pid=5486 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 18:35:23.480475 kernel: audit: type=1300 audit(1769193323.453:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd8de4110 a2=3 a3=0 items=0 ppid=1 pid=5486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:23.484922 systemd-logind[1655]: New session 17 of user core. Jan 23 18:35:23.453000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:23.487860 kernel: audit: type=1327 audit(1769193323.453:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:23.488562 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 18:35:23.491000 audit[5486]: USER_START pid=5486 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.500136 kernel: audit: type=1105 audit(1769193323.491:778): pid=5486 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.500202 kernel: audit: type=1103 audit(1769193323.499:779): pid=5490 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.499000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.714114 kubelet[2940]: E0123 18:35:23.714021 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:35:23.865695 sshd[5490]: Connection closed by 68.220.241.50 port 45986 Jan 23 18:35:23.866995 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:23.868000 audit[5486]: USER_END pid=5486 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.874751 systemd[1]: sshd@15-10.0.6.238:22-68.220.241.50:45986.service: Deactivated successfully. Jan 23 18:35:23.874881 kernel: audit: type=1106 audit(1769193323.868:780): pid=5486 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.876426 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:35:23.879251 systemd-logind[1655]: Session 17 logged out. Waiting for processes to exit. 
Jan 23 18:35:23.868000 audit[5486]: CRED_DISP pid=5486 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.882832 kernel: audit: type=1104 audit(1769193323.868:781): pid=5486 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:23.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.6.238:22-68.220.241.50:45986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:23.883878 systemd-logind[1655]: Removed session 17. Jan 23 18:35:24.711295 kubelet[2940]: E0123 18:35:24.711217 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:35:28.710486 kubelet[2940]: E0123 18:35:28.710442 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:35:28.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.238:22-68.220.241.50:46002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:28.977202 systemd[1]: Started sshd@16-10.0.6.238:22-68.220.241.50:46002.service - OpenSSH per-connection server daemon (68.220.241.50:46002). Jan 23 18:35:28.982146 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:35:28.982334 kernel: audit: type=1130 audit(1769193328.976:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.238:22-68.220.241.50:46002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:29.522000 audit[5504]: USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.524071 sshd[5504]: Accepted publickey for core from 68.220.241.50 port 46002 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:29.528874 kernel: audit: type=1101 audit(1769193329.522:784): pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.528000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.531035 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:29.534068 kernel: audit: type=1103 audit(1769193329.528:785): pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.534419 kernel: audit: type=1006 audit(1769193329.529:786): pid=5504 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 18:35:29.529000 audit[5504]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf49e150 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:29.547605 kernel: audit: type=1300 audit(1769193329.529:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf49e150 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:29.547692 kernel: audit: type=1327 audit(1769193329.529:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:29.529000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:29.550490 systemd-logind[1655]: New session 18 of user core. Jan 23 18:35:29.555022 systemd[1]: Started session-18.scope - Session 18 of User core. 
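Each SSH connection above leaves a fixed chain of audit records (USER_ACCT, CRED_ACQ, SYSCALL, PROCTITLE and USER_START on open; USER_END, CRED_DISP and SERVICE_STOP on close), all encoded as space-separated key=value fields with a nested msg='...' payload. The sketch below is a rough, hand-rolled illustration of splitting such a record into fields for the shapes seen in this log; it is not the auparse API that ships with auditd:

```python
import re

# key=value pairs; values may be bare, double-quoted, or single-quoted.
FIELD_RE = re.compile(r"""(\w[\w-]*)=("[^"]*"|'[^']*'|\S+)""")

def parse_audit_fields(record):
    """Parse one audit record into a dict; the msg='...' payload is parsed recursively."""
    fields = {}
    for key, value in FIELD_RE.findall(record):
        if value and value[0] in "\"'" and value[-1] == value[0]:
            value = value[1:-1]
        if key == "msg" and "=" in value:
            value = parse_audit_fields(value)
        fields[key] = value
    return fields

example = (
    "USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 "
    "subj=system_u:system_r:kernel_t:s0 "
    "msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock "
    'acct="core" exe="/usr/lib64/misc/sshd-session" '
    "hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'"
)
parsed = parse_audit_fields(example)
print(parsed["pid"], parsed["msg"]["acct"], parsed["msg"]["res"])  # 5504 core success
```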
Jan 23 18:35:29.559000 audit[5504]: USER_START pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.565904 kernel: audit: type=1105 audit(1769193329.559:787): pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.562000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.571834 kernel: audit: type=1103 audit(1769193329.562:788): pid=5508 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.709643 kubelet[2940]: E0123 18:35:29.709196 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:35:29.930841 sshd[5508]: Connection closed by 68.220.241.50 port 46002 Jan 23 18:35:29.931964 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:29.932000 audit[5504]: USER_END pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.936572 systemd[1]: sshd@16-10.0.6.238:22-68.220.241.50:46002.service: Deactivated successfully. Jan 23 18:35:29.938843 kernel: audit: type=1106 audit(1769193329.932:789): pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.932000 audit[5504]: CRED_DISP pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.940919 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 23 18:35:29.943035 kernel: audit: type=1104 audit(1769193329.932:790): pid=5504 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:29.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.6.238:22-68.220.241.50:46002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:29.945349 systemd-logind[1655]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:35:29.946422 systemd-logind[1655]: Removed session 18. Jan 23 18:35:30.711597 kubelet[2940]: E0123 18:35:30.711517 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:35:31.650847 update_engine[1660]: I20260123 18:35:31.650439 1660 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:35:31.650847 update_engine[1660]: I20260123 18:35:31.650527 1660 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:35:31.651381 update_engine[1660]: I20260123 18:35:31.651085 1660 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 18:35:31.657466 update_engine[1660]: E20260123 18:35:31.657428 1660 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:35:31.657582 update_engine[1660]: I20260123 18:35:31.657508 1660 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 18:35:35.035393 systemd[1]: Started sshd@17-10.0.6.238:22-68.220.241.50:40296.service - OpenSSH per-connection server daemon (68.220.241.50:40296). Jan 23 18:35:35.040442 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:35:35.040468 kernel: audit: type=1130 audit(1769193335.034:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.238:22-68.220.241.50:40296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:35.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.238:22-68.220.241.50:40296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:35.570000 audit[5548]: USER_ACCT pid=5548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.573466 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:35.575292 sshd[5548]: Accepted publickey for core from 68.220.241.50 port 40296 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:35.577104 kernel: audit: type=1101 audit(1769193335.570:793): pid=5548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.571000 audit[5548]: CRED_ACQ pid=5548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.586958 kernel: audit: type=1103 audit(1769193335.571:794): pid=5548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.590884 kernel: audit: type=1006 audit(1769193335.571:795): pid=5548 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 23 18:35:35.571000 audit[5548]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6bf60ac0 a2=3 a3=0 items=0 ppid=1 pid=5548 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:35.571000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:35.596359 systemd-logind[1655]: New session 19 of user core. Jan 23 18:35:35.597641 kernel: audit: type=1300 audit(1769193335.571:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6bf60ac0 a2=3 a3=0 items=0 ppid=1 pid=5548 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:35.597681 kernel: audit: type=1327 audit(1769193335.571:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:35.600054 systemd[1]: Started session-19.scope - Session 19 of User core. 
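The proctitle= values in the PROCTITLE records are hex-encoded command lines; when the kernel captured a full argv, the elements are separated by NUL bytes. A short decoding sketch, using the two proctitle values that actually occur in this log:

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE value; NUL bytes separate argv elements."""
    raw = bytes.fromhex(hex_value)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

# The sshd-session records above carry a single title string:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> ['sshd-session: core [priv]']

# The iptables-restore records later in the log carry a full argv:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ['iptables-restore', '-w', '5', '--noflush', '--counters']
```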
Jan 23 18:35:35.603000 audit[5548]: USER_START pid=5548 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.606000 audit[5552]: CRED_ACQ pid=5552 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.611749 kernel: audit: type=1105 audit(1769193335.603:796): pid=5548 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.611829 kernel: audit: type=1103 audit(1769193335.606:797): pid=5552 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.937629 sshd[5552]: Connection closed by 68.220.241.50 port 40296 Jan 23 18:35:35.938127 sshd-session[5548]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:35.942000 audit[5548]: USER_END pid=5548 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.947183 systemd[1]: sshd@17-10.0.6.238:22-68.220.241.50:40296.service: Deactivated successfully. Jan 23 18:35:35.948780 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:35:35.949839 kernel: audit: type=1106 audit(1769193335.942:798): pid=5548 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.942000 audit[5548]: CRED_DISP pid=5548 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.954958 systemd-logind[1655]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:35:35.958832 kernel: audit: type=1104 audit(1769193335.942:799): pid=5548 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:35.959330 systemd-logind[1655]: Removed session 19. Jan 23 18:35:35.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.6.238:22-68.220.241.50:40296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:36.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.6.238:22-68.220.241.50:40302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:36.050362 systemd[1]: Started sshd@18-10.0.6.238:22-68.220.241.50:40302.service - OpenSSH per-connection server daemon (68.220.241.50:40302). Jan 23 18:35:36.589000 audit[5564]: USER_ACCT pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:36.590799 sshd[5564]: Accepted publickey for core from 68.220.241.50 port 40302 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:36.590000 audit[5564]: CRED_ACQ pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:36.590000 audit[5564]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed5f52e50 a2=3 a3=0 items=0 ppid=1 pid=5564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:36.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:36.593155 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:36.600740 systemd-logind[1655]: New session 20 of user core. Jan 23 18:35:36.604405 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 18:35:36.607000 audit[5564]: USER_START pid=5564 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:36.609000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:36.710159 kubelet[2940]: E0123 18:35:36.710078 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:35:37.242634 sshd[5569]: Connection closed by 68.220.241.50 port 40302 Jan 23 18:35:37.243568 sshd-session[5564]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:37.244000 audit[5564]: USER_END pid=5564 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:37.245000 audit[5564]: CRED_DISP pid=5564 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:37.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.6.238:22-68.220.241.50:40302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:37.248304 systemd-logind[1655]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:35:37.248970 systemd[1]: sshd@18-10.0.6.238:22-68.220.241.50:40302.service: Deactivated successfully. Jan 23 18:35:37.251707 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:35:37.255524 systemd-logind[1655]: Removed session 20. Jan 23 18:35:37.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.6.238:22-68.220.241.50:40316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:37.352224 systemd[1]: Started sshd@19-10.0.6.238:22-68.220.241.50:40316.service - OpenSSH per-connection server daemon (68.220.241.50:40316). 
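Each connection is handled by a socket-activated per-connection sshd unit whose instance name, as seen in these entries, packs a connection counter and both endpoints into the unit name (sshd@19-10.0.6.238:22-68.220.241.50:40316.service and so on). A small, illustrative parser for that naming scheme:

```python
import re

# sshd@<n>-<local-addr>:<local-port>-<remote-addr>:<remote-port>.service
UNIT_RE = re.compile(
    r"sshd@(?P<instance>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)"
    r"-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

def parse_sshd_unit(name):
    """Extract the connection counter and endpoints from a per-connection unit name."""
    m = UNIT_RE.search(name)
    return m.groupdict() if m else None

print(parse_sshd_unit("sshd@19-10.0.6.238:22-68.220.241.50:40316.service"))
# {'instance': '19', 'laddr': '10.0.6.238', 'lport': '22',
#  'raddr': '68.220.241.50', 'rport': '40316'}
```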
Jan 23 18:35:37.712871 kubelet[2940]: E0123 18:35:37.712711 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:35:37.716238 kubelet[2940]: E0123 18:35:37.715846 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:35:37.915000 audit[5579]: USER_ACCT pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:37.916986 sshd[5579]: Accepted publickey for core from 68.220.241.50 port 40316 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:37.916000 audit[5579]: CRED_ACQ pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:37.916000 audit[5579]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcebfe4410 a2=3 a3=0 items=0 ppid=1 pid=5579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:37.916000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:37.918187 sshd-session[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:37.924056 systemd-logind[1655]: New session 21 of user core. Jan 23 18:35:37.929004 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 18:35:37.932000 audit[5579]: USER_START pid=5579 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:37.933000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:38.708000 audit[5602]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:35:38.708000 audit[5602]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffd58643d0 a2=0 a3=7fffd58643bc items=0 ppid=3094 pid=5602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:38.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:35:38.712000 audit[5602]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:35:38.712000 audit[5602]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd58643d0 a2=0 a3=0 items=0 ppid=3094 pid=5602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:38.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:35:38.813486 sshd[5583]: Connection closed by 68.220.241.50 port 40316 Jan 23 18:35:38.813397 sshd-session[5579]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:38.814000 audit[5579]: USER_END pid=5579 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:38.815000 audit[5579]: CRED_DISP pid=5579 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:38.818653 systemd-logind[1655]: Session 21 logged out. Waiting for processes to exit. Jan 23 18:35:38.818855 systemd[1]: sshd@19-10.0.6.238:22-68.220.241.50:40316.service: Deactivated successfully. Jan 23 18:35:38.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.6.238:22-68.220.241.50:40316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:38.820991 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:35:38.823666 systemd-logind[1655]: Removed session 21. 
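The NETFILTER_CFG/SYSCALL pairs here come from iptables-restore (xtables-nft-multi invoked with -w 5 --noflush --counters, parent pid 3094); the pattern is consistent with a periodic kube-proxy rule sync, although the parent process is never named in this excerpt. A rough sketch for tallying how many entries each restore touched, per table and operation:

```python
import re
from collections import Counter

NFT_RE = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")

def netfilter_summary(lines):
    """Sum the 'entries' counts of NETFILTER_CFG records, keyed by (table, op)."""
    totals = Counter()
    for line in lines:
        for table, _family, entries, op in NFT_RE.findall(line):
            totals[(table, op)] += int(entries)
    return totals

sample = [
    "audit[5602]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5602",
    "audit[5602]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5602",
]
for (table, op), count in netfilter_summary(sample).items():
    print(f"{table} {op}: {count} entries")
```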
Jan 23 18:35:38.935654 systemd[1]: Started sshd@20-10.0.6.238:22-68.220.241.50:40320.service - OpenSSH per-connection server daemon (68.220.241.50:40320). Jan 23 18:35:38.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.238:22-68.220.241.50:40320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:39.508000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:39.509473 sshd[5607]: Accepted publickey for core from 68.220.241.50 port 40320 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:39.510000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:39.510000 audit[5607]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff27ba1220 a2=3 a3=0 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:39.510000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:39.511903 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:39.517230 systemd-logind[1655]: New session 22 of user core. Jan 23 18:35:39.523097 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 18:35:39.526000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:39.528000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:39.747000 audit[5618]: NETFILTER_CFG table=filter:133 family=2 entries=38 op=nft_register_rule pid=5618 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:35:39.747000 audit[5618]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffce2f3a6e0 a2=0 a3=7ffce2f3a6cc items=0 ppid=3094 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:39.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:35:39.751000 audit[5618]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5618 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:35:39.751000 audit[5618]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce2f3a6e0 a2=0 a3=0 items=0 ppid=3094 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:39.751000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:35:40.039900 sshd[5611]: Connection closed by 68.220.241.50 port 40320 Jan 23 18:35:40.039749 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:40.044202 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 23 18:35:40.044317 kernel: audit: type=1106 audit(1769193340.041:829): pid=5607 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.041000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.046499 systemd[1]: sshd@20-10.0.6.238:22-68.220.241.50:40320.service: Deactivated successfully. Jan 23 18:35:40.041000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.050527 systemd[1]: session-22.scope: Deactivated successfully. 
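Most audit events appear twice in this log: once as the named record journald receives directly, and once re-printed through the kernel ring buffer as "audit: type=NNNN ..." (the kauditd_printk_skb "callbacks suppressed" lines indicate that this second, rate-limited path dropped some of them). An illustrative lookup table for the numeric types that occur in this excerpt, matching the pairs visible above:

```python
# Audit record types seen in this excerpt and the record names they
# correspond to in the named (journald) form of the same events.
AUDIT_TYPES = {
    1006: "LOGIN",          # auid/ses assignment for the new session
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM setcred (acquire)
    1104: "CRED_DISP",      # PAM setcred (dispose)
    1105: "USER_START",     # PAM session_open
    1106: "USER_END",       # PAM session_close
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",
    1327: "PROCTITLE",
}

def label(type_number: int) -> str:
    """Map a numeric audit type to its record name, if it occurs in this log."""
    return AUDIT_TYPES.get(type_number, f"UNKNOWN({type_number})")

print(label(1105), label(1131))  # USER_START SERVICE_STOP
```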
Jan 23 18:35:40.052513 systemd-logind[1655]: Session 22 logged out. Waiting for processes to exit. Jan 23 18:35:40.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.238:22-68.220.241.50:40320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:40.055123 kernel: audit: type=1104 audit(1769193340.041:830): pid=5607 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.055174 kernel: audit: type=1131 audit(1769193340.044:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.6.238:22-68.220.241.50:40320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:40.056437 systemd-logind[1655]: Removed session 22. Jan 23 18:35:40.146155 systemd[1]: Started sshd@21-10.0.6.238:22-68.220.241.50:40332.service - OpenSSH per-connection server daemon (68.220.241.50:40332). Jan 23 18:35:40.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.238:22-68.220.241.50:40332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:40.151852 kernel: audit: type=1130 audit(1769193340.145:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.238:22-68.220.241.50:40332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:40.671000 audit[5623]: USER_ACCT pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.674081 sshd[5623]: Accepted publickey for core from 68.220.241.50 port 40332 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:40.680538 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:40.684848 kernel: audit: type=1101 audit(1769193340.671:833): pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.677000 audit[5623]: CRED_ACQ pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.695847 kernel: audit: type=1103 audit(1769193340.677:834): pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.677000 audit[5623]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3516d330 a2=3 a3=0 items=0 ppid=1 pid=5623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:40.706810 kernel: audit: type=1006 audit(1769193340.677:835): pid=5623 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 23 18:35:40.706897 kernel: audit: type=1300 audit(1769193340.677:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3516d330 a2=3 a3=0 items=0 ppid=1 pid=5623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:40.704685 systemd-logind[1655]: New session 23 of user core. Jan 23 18:35:40.712766 kernel: audit: type=1327 audit(1769193340.677:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:40.677000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:40.712147 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 18:35:40.714538 kubelet[2940]: E0123 18:35:40.714499 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:35:40.719000 audit[5623]: USER_START pid=5623 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.730596 kernel: audit: type=1105 audit(1769193340.719:836): pid=5623 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:40.729000 audit[5627]: CRED_ACQ pid=5627 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:41.068872 sshd[5627]: Connection closed by 68.220.241.50 port 40332 Jan 23 18:35:41.070405 sshd-session[5623]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:41.072000 audit[5623]: USER_END pid=5623 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:41.072000 audit[5623]: CRED_DISP pid=5623 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 
18:35:41.075709 systemd[1]: sshd@21-10.0.6.238:22-68.220.241.50:40332.service: Deactivated successfully. Jan 23 18:35:41.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.6.238:22-68.220.241.50:40332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:41.077607 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 18:35:41.078407 systemd-logind[1655]: Session 23 logged out. Waiting for processes to exit. Jan 23 18:35:41.079579 systemd-logind[1655]: Removed session 23. Jan 23 18:35:41.649855 update_engine[1660]: I20260123 18:35:41.649433 1660 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:35:41.649855 update_engine[1660]: I20260123 18:35:41.649544 1660 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:35:41.650434 update_engine[1660]: I20260123 18:35:41.650409 1660 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 18:35:41.657506 update_engine[1660]: E20260123 18:35:41.657440 1660 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:35:41.658025 update_engine[1660]: I20260123 18:35:41.657938 1660 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 18:35:41.712096 kubelet[2940]: E0123 18:35:41.711877 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:35:43.711852 kubelet[2940]: E0123 18:35:43.710313 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:35:44.210000 audit[5639]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:35:44.210000 audit[5639]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc1d9ef70 a2=0 a3=7fffc1d9ef5c items=0 ppid=3094 pid=5639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:44.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 
18:35:44.216000 audit[5639]: NETFILTER_CFG table=nat:136 family=2 entries=104 op=nft_register_chain pid=5639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:35:44.216000 audit[5639]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffc1d9ef70 a2=0 a3=7fffc1d9ef5c items=0 ppid=3094 pid=5639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:44.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:35:46.185349 systemd[1]: Started sshd@22-10.0.6.238:22-68.220.241.50:48304.service - OpenSSH per-connection server daemon (68.220.241.50:48304). Jan 23 18:35:46.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.238:22-68.220.241.50:48304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:46.186797 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 23 18:35:46.186858 kernel: audit: type=1130 audit(1769193346.184:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.238:22-68.220.241.50:48304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:46.724000 audit[5641]: USER_ACCT pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:46.725481 sshd[5641]: Accepted publickey for core from 68.220.241.50 port 48304 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:46.732064 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:46.729000 audit[5641]: CRED_ACQ pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:46.738043 kernel: audit: type=1101 audit(1769193346.724:844): pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:46.738250 kernel: audit: type=1103 audit(1769193346.729:845): pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:46.743257 kernel: audit: type=1006 audit(1769193346.729:846): pid=5641 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 18:35:46.729000 audit[5641]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4aaf6270 a2=3 a3=0 items=0 ppid=1 pid=5641 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:46.751447 kernel: 
audit: type=1300 audit(1769193346.729:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4aaf6270 a2=3 a3=0 items=0 ppid=1 pid=5641 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:46.729000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:46.760971 systemd-logind[1655]: New session 24 of user core. Jan 23 18:35:46.763166 kernel: audit: type=1327 audit(1769193346.729:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:46.768275 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 18:35:46.772000 audit[5641]: USER_START pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:46.779865 kernel: audit: type=1105 audit(1769193346.772:847): pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:46.775000 audit[5645]: CRED_ACQ pid=5645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:46.785846 kernel: audit: type=1103 audit(1769193346.775:848): pid=5645 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:47.113897 sshd[5645]: Connection closed by 68.220.241.50 port 48304 Jan 23 18:35:47.114314 sshd-session[5641]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:47.115000 audit[5641]: USER_END pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:47.120283 systemd[1]: sshd@22-10.0.6.238:22-68.220.241.50:48304.service: Deactivated successfully. Jan 23 18:35:47.122611 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 23 18:35:47.122833 kernel: audit: type=1106 audit(1769193347.115:849): pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:47.115000 audit[5641]: CRED_DISP pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:47.127220 systemd-logind[1655]: Session 24 logged out. Waiting for processes to exit. Jan 23 18:35:47.127871 kernel: audit: type=1104 audit(1769193347.115:850): pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:47.128267 systemd-logind[1655]: Removed session 24. Jan 23 18:35:47.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.6.238:22-68.220.241.50:48304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:48.710568 kubelet[2940]: E0123 18:35:48.710346 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:35:49.715494 kubelet[2940]: E0123 18:35:49.715113 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:35:50.709445 kubelet[2940]: E0123 18:35:50.709363 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:35:51.653044 
update_engine[1660]: I20260123 18:35:51.652943 1660 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:35:51.653523 update_engine[1660]: I20260123 18:35:51.653133 1660 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:35:51.654000 update_engine[1660]: I20260123 18:35:51.653955 1660 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 18:35:51.659215 update_engine[1660]: E20260123 18:35:51.659153 1660 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:35:51.659417 update_engine[1660]: I20260123 18:35:51.659317 1660 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 18:35:51.659417 update_engine[1660]: I20260123 18:35:51.659348 1660 omaha_request_action.cc:617] Omaha request response: Jan 23 18:35:51.659575 update_engine[1660]: E20260123 18:35:51.659536 1660 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 18:35:51.661744 update_engine[1660]: I20260123 18:35:51.661659 1660 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 18:35:51.661744 update_engine[1660]: I20260123 18:35:51.661697 1660 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:35:51.661879 update_engine[1660]: I20260123 18:35:51.661751 1660 update_attempter.cc:306] Processing Done. Jan 23 18:35:51.661879 update_engine[1660]: E20260123 18:35:51.661787 1660 update_attempter.cc:619] Update failed. Jan 23 18:35:51.661937 update_engine[1660]: I20260123 18:35:51.661803 1660 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 18:35:51.662319 update_engine[1660]: I20260123 18:35:51.661931 1660 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 18:35:51.662319 update_engine[1660]: I20260123 18:35:51.661950 1660 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 23 18:35:51.662319 update_engine[1660]: I20260123 18:35:51.662106 1660 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 18:35:51.662319 update_engine[1660]: I20260123 18:35:51.662160 1660 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 18:35:51.662319 update_engine[1660]: I20260123 18:35:51.662177 1660 omaha_request_action.cc:272] Request: Jan 23 18:35:51.662319 update_engine[1660]: Jan 23 18:35:51.662319 update_engine[1660]: Jan 23 18:35:51.662319 update_engine[1660]: Jan 23 18:35:51.662319 update_engine[1660]: Jan 23 18:35:51.662319 update_engine[1660]: Jan 23 18:35:51.662319 update_engine[1660]: Jan 23 18:35:51.662319 update_engine[1660]: I20260123 18:35:51.662194 1660 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:35:51.662319 update_engine[1660]: I20260123 18:35:51.662251 1660 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:35:51.663271 update_engine[1660]: I20260123 18:35:51.662984 1660 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
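The update_engine entries in this stretch show an Omaha update check that never gets off the ground: the configured server is the literal string "disabled" (typically the result of SERVER=disabled in /etc/flatcar/update.conf), so every transfer fails at DNS resolution, the attempt is retried, the failure is converted to kActionCodeOmahaErrorInHTTPResponse, an error event is reported, and the next check is scheduled 42m9s out (logged just below). A small sketch for turning that compact duration format into seconds:

```python
import re

DURATION_RE = re.compile(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?$")
NEXT_CHECK_RE = re.compile(r"Next update check in (\S+)")

def parse_duration(text: str) -> int:
    """Convert update_engine's compact durations ('42m9s', '1h2m3s') to seconds."""
    m = DURATION_RE.match(text)
    if not m or not any(m.groups()):
        raise ValueError(f"not a duration: {text!r}")
    hours, minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return hours * 3600 + minutes * 60 + seconds

line = "update_check_scheduler.cc:74] Next update check in 42m9s"
delay = parse_duration(NEXT_CHECK_RE.search(line).group(1))
print(delay)  # 2529
```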
Jan 23 18:35:51.663307 locksmithd[1699]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 18:35:51.668560 update_engine[1660]: E20260123 18:35:51.668483 1660 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 23 18:35:51.668687 update_engine[1660]: I20260123 18:35:51.668648 1660 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 18:35:51.668728 update_engine[1660]: I20260123 18:35:51.668677 1660 omaha_request_action.cc:617] Omaha request response: Jan 23 18:35:51.668728 update_engine[1660]: I20260123 18:35:51.668695 1660 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:35:51.668728 update_engine[1660]: I20260123 18:35:51.668710 1660 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:35:51.668834 update_engine[1660]: I20260123 18:35:51.668725 1660 update_attempter.cc:306] Processing Done. Jan 23 18:35:51.668834 update_engine[1660]: I20260123 18:35:51.668742 1660 update_attempter.cc:310] Error event sent. Jan 23 18:35:51.668834 update_engine[1660]: I20260123 18:35:51.668763 1660 update_check_scheduler.cc:74] Next update check in 42m9s Jan 23 18:35:51.670133 locksmithd[1699]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 18:35:52.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.6.238:22-68.220.241.50:48314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:52.220835 systemd[1]: Started sshd@23-10.0.6.238:22-68.220.241.50:48314.service - OpenSSH per-connection server daemon (68.220.241.50:48314). Jan 23 18:35:52.222518 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:35:52.222570 kernel: audit: type=1130 audit(1769193352.220:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.6.238:22-68.220.241.50:48314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:52.711973 containerd[1695]: time="2026-01-23T18:35:52.711919757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:35:52.768000 audit[5658]: USER_ACCT pid=5658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:52.769571 sshd[5658]: Accepted publickey for core from 68.220.241.50 port 48314 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:52.773861 kernel: audit: type=1101 audit(1769193352.768:853): pid=5658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:52.774416 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:52.772000 audit[5658]: CRED_ACQ pid=5658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:52.780380 systemd-logind[1655]: New session 25 of user core. Jan 23 18:35:52.781591 kernel: audit: type=1103 audit(1769193352.772:854): pid=5658 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:52.772000 audit[5658]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4faa9e20 a2=3 a3=0 items=0 ppid=1 pid=5658 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:52.786916 kernel: audit: type=1006 audit(1769193352.772:855): pid=5658 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 23 18:35:52.786959 kernel: audit: type=1300 audit(1769193352.772:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4faa9e20 a2=3 a3=0 items=0 ppid=1 pid=5658 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:52.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:52.790877 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 18:35:52.792873 kernel: audit: type=1327 audit(1769193352.772:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:52.797000 audit[5658]: USER_START pid=5658 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:52.803834 kernel: audit: type=1105 audit(1769193352.797:856): pid=5658 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:52.799000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:52.808847 kernel: audit: type=1103 audit(1769193352.799:857): pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:53.120440 sshd[5662]: Connection closed by 68.220.241.50 port 48314 Jan 23 18:35:53.120282 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:53.121000 audit[5658]: USER_END pid=5658 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:53.129870 kernel: audit: type=1106 audit(1769193353.121:858): pid=5658 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:53.129937 kernel: audit: type=1104 audit(1769193353.124:859): pid=5658 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:53.124000 audit[5658]: CRED_DISP pid=5658 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:53.130309 systemd[1]: sshd@23-10.0.6.238:22-68.220.241.50:48314.service: Deactivated successfully. Jan 23 18:35:53.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.6.238:22-68.220.241.50:48314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:53.134081 systemd[1]: session-25.scope: Deactivated successfully. 
Jan 23 18:35:53.138871 systemd-logind[1655]: Session 25 logged out. Waiting for processes to exit. Jan 23 18:35:53.139584 systemd-logind[1655]: Removed session 25. Jan 23 18:35:53.231589 containerd[1695]: time="2026-01-23T18:35:53.231546409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:35:53.233191 containerd[1695]: time="2026-01-23T18:35:53.233162739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:35:53.233274 containerd[1695]: time="2026-01-23T18:35:53.233232581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:35:53.233884 kubelet[2940]: E0123 18:35:53.233551 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:35:53.233884 kubelet[2940]: E0123 18:35:53.233613 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:35:53.233884 kubelet[2940]: E0123 18:35:53.233737 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:35:53.235915 containerd[1695]: time="2026-01-23T18:35:53.235848826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:35:53.749055 containerd[1695]: time="2026-01-23T18:35:53.748986168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:35:53.751223 containerd[1695]: time="2026-01-23T18:35:53.751140422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:35:53.751318 containerd[1695]: time="2026-01-23T18:35:53.751274765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:35:53.751976 kubelet[2940]: E0123 18:35:53.751807 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:35:53.752445 kubelet[2940]: E0123 18:35:53.751986 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:35:53.752445 kubelet[2940]: E0123 18:35:53.752099 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f9f4c8f6-267pf_calico-system(61d5f304-fb8b-48e8-ae8d-711deece6e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:35:53.752445 kubelet[2940]: E0123 18:35:53.752161 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:35:54.710154 kubelet[2940]: E0123 18:35:54.710107 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:35:54.711062 kubelet[2940]: E0123 18:35:54.710449 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:35:58.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.6.238:22-68.220.241.50:35666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:58.228115 systemd[1]: Started sshd@24-10.0.6.238:22-68.220.241.50:35666.service - OpenSSH per-connection server daemon (68.220.241.50:35666). Jan 23 18:35:58.229964 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:35:58.230008 kernel: audit: type=1130 audit(1769193358.227:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.6.238:22-68.220.241.50:35666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:35:58.781000 audit[5682]: USER_ACCT pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:58.783963 sshd[5682]: Accepted publickey for core from 68.220.241.50 port 35666 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:35:58.787836 kernel: audit: type=1101 audit(1769193358.781:862): pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:58.788889 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:35:58.787000 audit[5682]: CRED_ACQ pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:58.794832 kernel: audit: type=1103 audit(1769193358.787:863): pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:58.798835 kernel: audit: type=1006 audit(1769193358.787:864): pid=5682 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 23 18:35:58.787000 audit[5682]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe1515560 a2=3 a3=0 items=0 ppid=1 pid=5682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:58.804088 systemd-logind[1655]: New session 26 of user core. Jan 23 18:35:58.806835 kernel: audit: type=1300 audit(1769193358.787:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe1515560 a2=3 a3=0 items=0 ppid=1 pid=5682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:35:58.787000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:58.810839 kernel: audit: type=1327 audit(1769193358.787:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:35:58.811086 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 18:35:58.816000 audit[5682]: USER_START pid=5682 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:58.821000 audit[5686]: CRED_ACQ pid=5686 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:58.824096 kernel: audit: type=1105 audit(1769193358.816:865): pid=5682 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:58.824148 kernel: audit: type=1103 audit(1769193358.821:866): pid=5686 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:59.131234 sshd[5686]: Connection closed by 68.220.241.50 port 35666 Jan 23 18:35:59.131851 sshd-session[5682]: pam_unix(sshd:session): session closed for user core Jan 23 18:35:59.133000 audit[5682]: USER_END pid=5682 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:59.139838 kernel: audit: type=1106 audit(1769193359.133:867): pid=5682 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:59.135000 audit[5682]: CRED_DISP pid=5682 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:59.142089 systemd-logind[1655]: Session 26 logged out. Waiting for processes to exit. Jan 23 18:35:59.143337 systemd[1]: sshd@24-10.0.6.238:22-68.220.241.50:35666.service: Deactivated successfully. Jan 23 18:35:59.144260 kernel: audit: type=1104 audit(1769193359.135:868): pid=5682 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:35:59.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.6.238:22-68.220.241.50:35666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:35:59.146749 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 18:35:59.148526 systemd-logind[1655]: Removed session 26. 
Jan 23 18:36:02.711240 containerd[1695]: time="2026-01-23T18:36:02.710493366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:36:03.194176 containerd[1695]: time="2026-01-23T18:36:03.193976715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:36:03.196357 containerd[1695]: time="2026-01-23T18:36:03.196085084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:36:03.196357 containerd[1695]: time="2026-01-23T18:36:03.196168453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:36:03.197146 kubelet[2940]: E0123 18:36:03.197080 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:36:03.198287 kubelet[2940]: E0123 18:36:03.197979 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:36:03.199306 kubelet[2940]: E0123 18:36:03.198979 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xmb4s_calico-system(d1b9504d-be7a-4b41-b198-d33537aa128d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:36:03.199306 kubelet[2940]: E0123 18:36:03.199188 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:36:03.199492 containerd[1695]: time="2026-01-23T18:36:03.198566175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:36:03.743268 containerd[1695]: time="2026-01-23T18:36:03.743231896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:36:03.745035 containerd[1695]: time="2026-01-23T18:36:03.744988731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:36:03.745287 containerd[1695]: time="2026-01-23T18:36:03.745055595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:36:03.745487 kubelet[2940]: E0123 18:36:03.745349 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:36:03.745487 kubelet[2940]: E0123 18:36:03.745391 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:36:03.745766 kubelet[2940]: E0123 18:36:03.745715 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:36:03.746990 containerd[1695]: time="2026-01-23T18:36:03.746856259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:36:04.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.6.238:22-68.220.241.50:60332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:36:04.249239 systemd[1]: Started sshd@25-10.0.6.238:22-68.220.241.50:60332.service - OpenSSH per-connection server daemon (68.220.241.50:60332). Jan 23 18:36:04.250934 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:36:04.250999 kernel: audit: type=1130 audit(1769193364.247:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.6.238:22-68.220.241.50:60332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:36:04.257168 containerd[1695]: time="2026-01-23T18:36:04.257122286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:36:04.261052 containerd[1695]: time="2026-01-23T18:36:04.261001081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:36:04.261253 containerd[1695]: time="2026-01-23T18:36:04.261097086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:36:04.261295 kubelet[2940]: E0123 18:36:04.261262 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:36:04.261582 kubelet[2940]: E0123 18:36:04.261317 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:36:04.261582 kubelet[2940]: E0123 18:36:04.261392 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-p4bbc_calico-system(1bbaf1bf-0602-4b75-8639-4ec842393e67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:36:04.261582 kubelet[2940]: E0123 18:36:04.261433 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:36:04.710021 kubelet[2940]: E0123 18:36:04.709990 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:36:04.800000 audit[5724]: USER_ACCT pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:04.802659 sshd[5724]: Accepted publickey for core from 68.220.241.50 port 60332 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:36:04.805473 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:36:04.802000 audit[5724]: CRED_ACQ pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:04.810736 systemd-logind[1655]: New session 27 of user core. Jan 23 18:36:04.813036 kernel: audit: type=1101 audit(1769193364.800:871): pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:04.813217 kernel: audit: type=1103 audit(1769193364.802:872): pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:04.820161 kernel: audit: type=1006 audit(1769193364.802:873): pid=5724 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 23 18:36:04.819963 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 18:36:04.821830 kernel: audit: type=1300 audit(1769193364.802:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdff1d380 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:04.802000 audit[5724]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdff1d380 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:04.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:36:04.828000 audit[5724]: USER_START pid=5724 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:04.836975 kernel: audit: type=1327 audit(1769193364.802:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:36:04.837035 kernel: audit: type=1105 audit(1769193364.828:874): pid=5724 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:04.831000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=27 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:04.845858 kernel: audit: type=1103 audit(1769193364.831:875): pid=5728 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:05.166222 sshd[5728]: Connection closed by 68.220.241.50 port 60332 Jan 23 18:36:05.166705 sshd-session[5724]: pam_unix(sshd:session): session closed for user core Jan 23 18:36:05.166000 audit[5724]: USER_END pid=5724 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:05.174362 kernel: audit: type=1106 audit(1769193365.166:876): pid=5724 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:05.168000 audit[5724]: CRED_DISP pid=5724 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:05.176085 systemd[1]: sshd@25-10.0.6.238:22-68.220.241.50:60332.service: Deactivated successfully. Jan 23 18:36:05.177832 kernel: audit: type=1104 audit(1769193365.168:877): pid=5724 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:05.177561 systemd-logind[1655]: Session 27 logged out. Waiting for processes to exit. Jan 23 18:36:05.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.6.238:22-68.220.241.50:60332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:36:05.180475 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 18:36:05.184179 systemd-logind[1655]: Removed session 27. 
Jan 23 18:36:05.712542 kubelet[2940]: E0123 18:36:05.712476 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:36:07.713312 containerd[1695]: time="2026-01-23T18:36:07.713197894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:36:08.228253 containerd[1695]: time="2026-01-23T18:36:08.228205287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:36:08.230048 containerd[1695]: time="2026-01-23T18:36:08.229924113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:36:08.230048 containerd[1695]: time="2026-01-23T18:36:08.230022680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:36:08.230823 kubelet[2940]: E0123 18:36:08.230749 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:36:08.230823 kubelet[2940]: E0123 18:36:08.230789 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:36:08.231240 kubelet[2940]: E0123 18:36:08.231178 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-nhm59_calico-apiserver(d7423672-957c-488d-baee-8a9e9c290e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:36:08.231240 kubelet[2940]: E0123 18:36:08.231211 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:36:08.711008 containerd[1695]: 
time="2026-01-23T18:36:08.710979426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:36:09.192228 containerd[1695]: time="2026-01-23T18:36:09.192023182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:36:09.194404 containerd[1695]: time="2026-01-23T18:36:09.194254621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:36:09.194404 containerd[1695]: time="2026-01-23T18:36:09.194363768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:36:09.194588 kubelet[2940]: E0123 18:36:09.194550 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:36:09.195334 kubelet[2940]: E0123 18:36:09.194614 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:36:09.195334 kubelet[2940]: E0123 18:36:09.194725 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65674f688d-kxr2f_calico-system(fbf77e94-7d63-4cf9-9744-b692622d727e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:36:09.195334 kubelet[2940]: E0123 18:36:09.194777 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:36:10.281585 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:36:10.281675 kernel: audit: type=1130 audit(1769193370.278:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.6.238:22-68.220.241.50:60348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:36:10.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.6.238:22-68.220.241.50:60348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:36:10.279957 systemd[1]: Started sshd@26-10.0.6.238:22-68.220.241.50:60348.service - OpenSSH per-connection server daemon (68.220.241.50:60348). 
Jan 23 18:36:10.830000 audit[5740]: USER_ACCT pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:10.833181 sshd[5740]: Accepted publickey for core from 68.220.241.50 port 60348 ssh2: RSA SHA256:nKFqiC1tfI89CurykR2N2Qujx/39ZzIdQ7HqDt8w/Gw Jan 23 18:36:10.837400 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:36:10.845295 kernel: audit: type=1101 audit(1769193370.830:880): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:10.845381 kernel: audit: type=1103 audit(1769193370.834:881): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:10.834000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:10.844956 systemd-logind[1655]: New session 28 of user core. Jan 23 18:36:10.853077 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 18:36:10.855521 kernel: audit: type=1006 audit(1769193370.834:882): pid=5740 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 23 18:36:10.834000 audit[5740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd32258460 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:10.834000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:36:10.867233 kernel: audit: type=1300 audit(1769193370.834:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd32258460 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:10.867304 kernel: audit: type=1327 audit(1769193370.834:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:36:10.868855 kernel: audit: type=1105 audit(1769193370.856:883): pid=5740 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:10.856000 audit[5740]: USER_START pid=5740 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:10.862000 audit[5744]: CRED_ACQ pid=5744 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:10.883879 kernel: audit: type=1103 audit(1769193370.862:884): pid=5744 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:11.225221 sshd[5744]: Connection closed by 68.220.241.50 port 60348 Jan 23 18:36:11.226119 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Jan 23 18:36:11.228000 audit[5740]: USER_END pid=5740 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:11.236031 systemd[1]: sshd@26-10.0.6.238:22-68.220.241.50:60348.service: Deactivated successfully. Jan 23 18:36:11.237913 kernel: audit: type=1106 audit(1769193371.228:885): pid=5740 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:11.236365 systemd-logind[1655]: Session 28 logged out. Waiting for processes to exit. Jan 23 18:36:11.241714 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 18:36:11.229000 audit[5740]: CRED_DISP pid=5740 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:11.247395 kernel: audit: type=1104 audit(1769193371.229:886): pid=5740 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 23 18:36:11.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.6.238:22-68.220.241.50:60348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:36:11.249216 systemd-logind[1655]: Removed session 28. 
Jan 23 18:36:14.710215 kubelet[2940]: E0123 18:36:14.710113 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:36:15.713180 containerd[1695]: time="2026-01-23T18:36:15.713012258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:36:16.214318 containerd[1695]: time="2026-01-23T18:36:16.214265622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:36:16.216013 containerd[1695]: time="2026-01-23T18:36:16.215969687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:36:16.216103 containerd[1695]: time="2026-01-23T18:36:16.216063768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:36:16.216265 kubelet[2940]: E0123 18:36:16.216235 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:36:16.216849 kubelet[2940]: E0123 18:36:16.216516 2940 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:36:16.216849 kubelet[2940]: E0123 18:36:16.216598 2940 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67759dc977-rb8xc_calico-apiserver(2ae423e7-492d-4f77-ad52-275afa909708): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:36:16.216849 kubelet[2940]: E0123 18:36:16.216626 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:36:17.713316 kubelet[2940]: E0123 18:36:17.713203 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:36:20.711080 kubelet[2940]: E0123 18:36:20.710889 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:36:20.712116 kubelet[2940]: E0123 18:36:20.711252 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:36:24.710883 kubelet[2940]: E0123 18:36:24.710494 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:36:28.713777 kubelet[2940]: E0123 18:36:28.713471 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:36:29.711960 kubelet[2940]: E0123 18:36:29.711701 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:36:29.713611 kubelet[2940]: E0123 18:36:29.712737 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:36:31.711425 kubelet[2940]: E0123 18:36:31.711237 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:36:34.711676 kubelet[2940]: E0123 18:36:34.711582 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:36:35.710756 kubelet[2940]: E0123 18:36:35.710646 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:36:36.585443 kubelet[2940]: E0123 18:36:36.585289 2940 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from 
server: read tcp 10.0.6.238:40870->10.0.6.147:2379: read: connection timed out" Jan 23 18:36:37.834417 systemd[1]: cri-containerd-cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2.scope: Deactivated successfully. Jan 23 18:36:37.838092 systemd[1]: cri-containerd-cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2.scope: Consumed 3.695s CPU time, 61.6M memory peak. Jan 23 18:36:37.849858 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:36:37.850052 kernel: audit: type=1334 audit(1769193397.839:888): prog-id=98 op=UNLOAD Jan 23 18:36:37.839000 audit: BPF prog-id=98 op=UNLOAD Jan 23 18:36:37.850267 containerd[1695]: time="2026-01-23T18:36:37.847604122Z" level=info msg="received container exit event container_id:\"cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2\" id:\"cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2\" pid:2759 exit_status:1 exited_at:{seconds:1769193397 nanos:846180398}" Jan 23 18:36:37.839000 audit: BPF prog-id=102 op=UNLOAD Jan 23 18:36:37.858870 kernel: audit: type=1334 audit(1769193397.839:889): prog-id=102 op=UNLOAD Jan 23 18:36:37.840000 audit: BPF prog-id=256 op=LOAD Jan 23 18:36:37.868422 kernel: audit: type=1334 audit(1769193397.840:890): prog-id=256 op=LOAD Jan 23 18:36:37.868675 kernel: audit: type=1334 audit(1769193397.840:891): prog-id=83 op=UNLOAD Jan 23 18:36:37.840000 audit: BPF prog-id=83 op=UNLOAD Jan 23 18:36:37.884250 systemd[1]: cri-containerd-a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48.scope: Deactivated successfully. Jan 23 18:36:37.890307 kernel: audit: type=1334 audit(1769193397.885:892): prog-id=150 op=UNLOAD Jan 23 18:36:37.885000 audit: BPF prog-id=150 op=UNLOAD Jan 23 18:36:37.890526 containerd[1695]: time="2026-01-23T18:36:37.888731333Z" level=info msg="received container exit event container_id:\"a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48\" id:\"a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48\" pid:3265 exit_status:1 exited_at:{seconds:1769193397 nanos:885063633}" Jan 23 18:36:37.886333 systemd[1]: cri-containerd-a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48.scope: Consumed 31.074s CPU time, 109.3M memory peak. Jan 23 18:36:37.894966 kernel: audit: type=1334 audit(1769193397.885:893): prog-id=146 op=UNLOAD Jan 23 18:36:37.885000 audit: BPF prog-id=146 op=UNLOAD Jan 23 18:36:37.919378 systemd[1785]: Created slice background.slice - User Background Tasks Slice. Jan 23 18:36:37.920846 systemd[1785]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 23 18:36:37.923659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2-rootfs.mount: Deactivated successfully. Jan 23 18:36:37.954234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48-rootfs.mount: Deactivated successfully. Jan 23 18:36:37.954678 systemd[1785]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 23 18:36:38.599143 kubelet[2940]: I0123 18:36:38.599079 2940 scope.go:117] "RemoveContainer" containerID="a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48" Jan 23 18:36:38.602413 containerd[1695]: time="2026-01-23T18:36:38.602358626Z" level=info msg="CreateContainer within sandbox \"8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 18:36:38.607553 kubelet[2940]: I0123 18:36:38.607429 2940 scope.go:117] "RemoveContainer" containerID="cb11006010ca212c1e4b5185b2ecfc381eb07e7e60a8663d4a6ee313f7138fd2" Jan 23 18:36:38.612871 containerd[1695]: time="2026-01-23T18:36:38.612184842Z" level=info msg="CreateContainer within sandbox \"f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 18:36:38.623354 containerd[1695]: time="2026-01-23T18:36:38.623203866Z" level=info msg="Container a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:36:38.642593 containerd[1695]: time="2026-01-23T18:36:38.642539783Z" level=info msg="CreateContainer within sandbox \"8ea1abaccb7a8e79aa482f00c9551910ba53e49eafb043b204886abc9fd21f4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc\"" Jan 23 18:36:38.646171 containerd[1695]: time="2026-01-23T18:36:38.646097425Z" level=info msg="StartContainer for \"a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc\"" Jan 23 18:36:38.647843 containerd[1695]: time="2026-01-23T18:36:38.647199522Z" level=info msg="Container 58d8a2a00c86b8541a75baccb8b48ff91dfd609b1682399e5464d3b2d5b2ec1f: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:36:38.647843 containerd[1695]: time="2026-01-23T18:36:38.647233142Z" level=info msg="connecting to shim a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc" address="unix:///run/containerd/s/459664e11666a3242c51381cc86d46e86e42b1a949ecb28aa1180c0c2bc29fb8" protocol=ttrpc version=3 Jan 23 18:36:38.671565 containerd[1695]: time="2026-01-23T18:36:38.671301802Z" level=info msg="CreateContainer within sandbox \"f82832257feb3c8ee65326002be8cd6ff9e21ad4686064c3da1311cf5f402800\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"58d8a2a00c86b8541a75baccb8b48ff91dfd609b1682399e5464d3b2d5b2ec1f\"" Jan 23 18:36:38.672347 containerd[1695]: time="2026-01-23T18:36:38.672314059Z" level=info msg="StartContainer for \"58d8a2a00c86b8541a75baccb8b48ff91dfd609b1682399e5464d3b2d5b2ec1f\"" Jan 23 18:36:38.674100 systemd[1]: Started cri-containerd-a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc.scope - libcontainer container a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc. 
Jan 23 18:36:38.677309 containerd[1695]: time="2026-01-23T18:36:38.677254109Z" level=info msg="connecting to shim 58d8a2a00c86b8541a75baccb8b48ff91dfd609b1682399e5464d3b2d5b2ec1f" address="unix:///run/containerd/s/517c923cada8d2a4518c99ee7c7f8d8403d56b8a7dc75e89263d60f0c89d51fa" protocol=ttrpc version=3 Jan 23 18:36:38.692000 audit: BPF prog-id=257 op=LOAD Jan 23 18:36:38.695863 kernel: audit: type=1334 audit(1769193398.692:894): prog-id=257 op=LOAD Jan 23 18:36:38.696142 kernel: audit: type=1334 audit(1769193398.694:895): prog-id=258 op=LOAD Jan 23 18:36:38.694000 audit: BPF prog-id=258 op=LOAD Jan 23 18:36:38.694000 audit[5841]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.704850 kernel: audit: type=1300 audit(1769193398.694:895): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.710963 kernel: audit: type=1327 audit(1769193398.694:895): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.694000 audit: BPF prog-id=258 op=UNLOAD Jan 23 18:36:38.694000 audit[5841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.697000 audit: BPF prog-id=259 op=LOAD Jan 23 18:36:38.697000 audit[5841]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.697000 audit: BPF prog-id=260 op=LOAD Jan 23 18:36:38.697000 audit[5841]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.697000 audit: BPF prog-id=260 op=UNLOAD Jan 23 18:36:38.697000 audit[5841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.697000 audit: BPF prog-id=259 op=UNLOAD Jan 23 18:36:38.697000 audit[5841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.697000 audit: BPF prog-id=261 op=LOAD Jan 23 18:36:38.697000 audit[5841]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2999 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130616635316333643834386435326463323965303463333333636436 Jan 23 18:36:38.718002 systemd[1]: Started cri-containerd-58d8a2a00c86b8541a75baccb8b48ff91dfd609b1682399e5464d3b2d5b2ec1f.scope - libcontainer container 58d8a2a00c86b8541a75baccb8b48ff91dfd609b1682399e5464d3b2d5b2ec1f. 
Jan 23 18:36:38.741000 audit: BPF prog-id=262 op=LOAD Jan 23 18:36:38.742000 audit: BPF prog-id=263 op=LOAD Jan 23 18:36:38.742000 audit[5855]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2619 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643861326130306338366238353431613735626163636238623438 Jan 23 18:36:38.742000 audit: BPF prog-id=263 op=UNLOAD Jan 23 18:36:38.742000 audit[5855]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643861326130306338366238353431613735626163636238623438 Jan 23 18:36:38.742000 audit: BPF prog-id=264 op=LOAD Jan 23 18:36:38.742000 audit[5855]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2619 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643861326130306338366238353431613735626163636238623438 Jan 23 18:36:38.742000 audit: BPF prog-id=265 op=LOAD Jan 23 18:36:38.742000 audit[5855]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2619 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643861326130306338366238353431613735626163636238623438 Jan 23 18:36:38.742000 audit: BPF prog-id=265 op=UNLOAD Jan 23 18:36:38.742000 audit[5855]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643861326130306338366238353431613735626163636238623438 Jan 23 18:36:38.742000 audit: BPF prog-id=264 op=UNLOAD Jan 23 18:36:38.742000 audit[5855]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643861326130306338366238353431613735626163636238623438 Jan 23 18:36:38.742000 audit: BPF prog-id=266 op=LOAD Jan 23 18:36:38.742000 audit[5855]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2619 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:38.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643861326130306338366238353431613735626163636238623438 Jan 23 18:36:38.746826 containerd[1695]: time="2026-01-23T18:36:38.746770638Z" level=info msg="StartContainer for \"a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc\" returns successfully" Jan 23 18:36:38.788934 containerd[1695]: time="2026-01-23T18:36:38.788886866Z" level=info msg="StartContainer for \"58d8a2a00c86b8541a75baccb8b48ff91dfd609b1682399e5464d3b2d5b2ec1f\" returns successfully" Jan 23 18:36:38.919833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556206336.mount: Deactivated successfully. Jan 23 18:36:41.121390 systemd[1]: cri-containerd-3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3.scope: Deactivated successfully. Jan 23 18:36:41.122613 systemd[1]: cri-containerd-3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3.scope: Consumed 2.528s CPU time, 23.5M memory peak. Jan 23 18:36:41.123000 audit: BPF prog-id=103 op=UNLOAD Jan 23 18:36:41.123000 audit: BPF prog-id=107 op=UNLOAD Jan 23 18:36:41.124000 audit: BPF prog-id=267 op=LOAD Jan 23 18:36:41.126074 containerd[1695]: time="2026-01-23T18:36:41.125137675Z" level=info msg="received container exit event container_id:\"3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3\" id:\"3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3\" pid:2781 exit_status:1 exited_at:{seconds:1769193401 nanos:124078828}" Jan 23 18:36:41.125000 audit: BPF prog-id=93 op=UNLOAD Jan 23 18:36:41.161222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3-rootfs.mount: Deactivated successfully. 
Jan 23 18:36:41.637497 kubelet[2940]: I0123 18:36:41.637168 2940 scope.go:117] "RemoveContainer" containerID="3f503f1b268f8c0f8f2288372bc26d035e0b9bdbae8dda845695897274f5b5b3" Jan 23 18:36:41.640901 containerd[1695]: time="2026-01-23T18:36:41.640804443Z" level=info msg="CreateContainer within sandbox \"63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 18:36:41.665282 containerd[1695]: time="2026-01-23T18:36:41.665211686Z" level=info msg="Container aebc52dacb954b21003a63e71b1ffe3b7849e3d2e02a243fcdd1c226d3758a39: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:36:41.679423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656951215.mount: Deactivated successfully. Jan 23 18:36:41.687305 containerd[1695]: time="2026-01-23T18:36:41.687151307Z" level=info msg="CreateContainer within sandbox \"63e2fa06b0a1ef29837cc72eaa25e48c15ecb1a872d1213409a52ef99aabafb6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"aebc52dacb954b21003a63e71b1ffe3b7849e3d2e02a243fcdd1c226d3758a39\"" Jan 23 18:36:41.688028 containerd[1695]: time="2026-01-23T18:36:41.687992109Z" level=info msg="StartContainer for \"aebc52dacb954b21003a63e71b1ffe3b7849e3d2e02a243fcdd1c226d3758a39\"" Jan 23 18:36:41.690257 containerd[1695]: time="2026-01-23T18:36:41.690214150Z" level=info msg="connecting to shim aebc52dacb954b21003a63e71b1ffe3b7849e3d2e02a243fcdd1c226d3758a39" address="unix:///run/containerd/s/a741b7efbe7a14cebcf779a973836330d98ff938b83fcaa5caafa00feabbe48f" protocol=ttrpc version=3 Jan 23 18:36:41.713850 kubelet[2940]: E0123 18:36:41.713778 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708" Jan 23 18:36:41.715756 kubelet[2940]: E0123 18:36:41.715552 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xmb4s" podUID="d1b9504d-be7a-4b41-b198-d33537aa128d" Jan 23 18:36:41.739145 systemd[1]: Started cri-containerd-aebc52dacb954b21003a63e71b1ffe3b7849e3d2e02a243fcdd1c226d3758a39.scope - libcontainer container aebc52dacb954b21003a63e71b1ffe3b7849e3d2e02a243fcdd1c226d3758a39. 
Jan 23 18:36:41.763000 audit: BPF prog-id=268 op=LOAD Jan 23 18:36:41.763000 audit: BPF prog-id=269 op=LOAD Jan 23 18:36:41.763000 audit[5916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2652 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:41.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165626335326461636239353462323130303361363365373162316666 Jan 23 18:36:41.764000 audit: BPF prog-id=269 op=UNLOAD Jan 23 18:36:41.764000 audit[5916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:41.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165626335326461636239353462323130303361363365373162316666 Jan 23 18:36:41.764000 audit: BPF prog-id=270 op=LOAD Jan 23 18:36:41.764000 audit[5916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2652 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:41.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165626335326461636239353462323130303361363365373162316666 Jan 23 18:36:41.764000 audit: BPF prog-id=271 op=LOAD Jan 23 18:36:41.764000 audit[5916]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2652 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:41.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165626335326461636239353462323130303361363365373162316666 Jan 23 18:36:41.764000 audit: BPF prog-id=271 op=UNLOAD Jan 23 18:36:41.764000 audit[5916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:41.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165626335326461636239353462323130303361363365373162316666 Jan 23 18:36:41.764000 audit: BPF prog-id=270 op=UNLOAD Jan 23 18:36:41.764000 audit[5916]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:41.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165626335326461636239353462323130303361363365373162316666 Jan 23 18:36:41.764000 audit: BPF prog-id=272 op=LOAD Jan 23 18:36:41.764000 audit[5916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2652 pid=5916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:36:41.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165626335326461636239353462323130303361363365373162316666 Jan 23 18:36:41.822928 containerd[1695]: time="2026-01-23T18:36:41.822802791Z" level=info msg="StartContainer for \"aebc52dacb954b21003a63e71b1ffe3b7849e3d2e02a243fcdd1c226d3758a39\" returns successfully" Jan 23 18:36:43.714034 kubelet[2940]: E0123 18:36:43.713554 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p4bbc" podUID="1bbaf1bf-0602-4b75-8639-4ec842393e67" Jan 23 18:36:45.712002 kubelet[2940]: E0123 18:36:45.711942 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-nhm59" podUID="d7423672-957c-488d-baee-8a9e9c290e13" Jan 23 18:36:46.586710 kubelet[2940]: E0123 18:36:46.586620 2940 controller.go:195] "Failed to update lease" err="Put \"https://10.0.6.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-1-0-1-5b0cac0ed6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:36:46.709922 kubelet[2940]: E0123 18:36:46.709875 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65674f688d-kxr2f" podUID="fbf77e94-7d63-4cf9-9744-b692622d727e" Jan 23 18:36:47.711506 kubelet[2940]: E0123 18:36:47.711427 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f9f4c8f6-267pf" podUID="61d5f304-fb8b-48e8-ae8d-711deece6e7e" Jan 23 18:36:50.014432 systemd[1]: cri-containerd-a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc.scope: Deactivated successfully. Jan 23 18:36:50.017810 containerd[1695]: time="2026-01-23T18:36:50.017676761Z" level=info msg="received container exit event container_id:\"a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc\" id:\"a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc\" pid:5854 exit_status:1 exited_at:{seconds:1769193410 nanos:15586625}" Jan 23 18:36:50.018000 audit: BPF prog-id=257 op=UNLOAD Jan 23 18:36:50.021899 kernel: kauditd_printk_skb: 66 callbacks suppressed Jan 23 18:36:50.022068 kernel: audit: type=1334 audit(1769193410.018:922): prog-id=257 op=UNLOAD Jan 23 18:36:50.018000 audit: BPF prog-id=261 op=UNLOAD Jan 23 18:36:50.029845 kernel: audit: type=1334 audit(1769193410.018:923): prog-id=261 op=UNLOAD Jan 23 18:36:50.079947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc-rootfs.mount: Deactivated successfully. 
Jan 23 18:36:50.676835 kubelet[2940]: I0123 18:36:50.676721 2940 scope.go:117] "RemoveContainer" containerID="a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48" Jan 23 18:36:50.677612 kubelet[2940]: I0123 18:36:50.677397 2940 scope.go:117] "RemoveContainer" containerID="a0af51c3d848d52dc29e04c333cd6c7adf0538128fe1a552d2b73d0e8656a8cc" Jan 23 18:36:50.677780 kubelet[2940]: E0123 18:36:50.677690 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-4njb6_tigera-operator(6a317212-5936-4448-b3a3-f54e06fe9387)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-4njb6" podUID="6a317212-5936-4448-b3a3-f54e06fe9387" Jan 23 18:36:50.680496 containerd[1695]: time="2026-01-23T18:36:50.680448885Z" level=info msg="RemoveContainer for \"a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48\"" Jan 23 18:36:50.689427 containerd[1695]: time="2026-01-23T18:36:50.689308112Z" level=info msg="RemoveContainer for \"a16cecb2a3617454523edec8921e718463684b19265550d06e3ed0ec3f35be48\" returns successfully" Jan 23 18:36:54.710678 kubelet[2940]: E0123 18:36:54.710603 2940 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67759dc977-rb8xc" podUID="2ae423e7-492d-4f77-ad52-275afa909708"