Jan 24 00:34:07.032261 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 21:38:55 -00 2026 Jan 24 00:34:07.032290 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:34:07.032299 kernel: BIOS-provided physical RAM map: Jan 24 00:34:07.032306 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:34:07.032312 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 24 00:34:07.032318 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 24 00:34:07.032328 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 24 00:34:07.032335 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 24 00:34:07.032341 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 24 00:34:07.032347 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 24 00:34:07.032354 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable Jan 24 00:34:07.032360 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 24 00:34:07.032366 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 24 00:34:07.032373 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 24 00:34:07.032383 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 24 00:34:07.032390 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 24 00:34:07.032396 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 24 00:34:07.032403 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 24 00:34:07.032410 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 24 00:34:07.032417 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 24 00:34:07.032425 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 24 00:34:07.032432 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 24 00:34:07.032438 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 24 00:34:07.032445 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 24 00:34:07.032452 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 24 00:34:07.032458 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:34:07.032465 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 24 00:34:07.032471 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 24 00:34:07.032478 kernel: NX (Execute Disable) protection: active Jan 24 00:34:07.032484 kernel: APIC: Static calls initialized Jan 24 00:34:07.032491 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable Jan 24 00:34:07.032500 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable Jan 24 00:34:07.032507 kernel: extended physical RAM map: Jan 24 00:34:07.032513 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 
00:34:07.032520 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 24 00:34:07.032527 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 24 00:34:07.032533 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 24 00:34:07.032540 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 24 00:34:07.032547 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 24 00:34:07.032553 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 24 00:34:07.032565 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable Jan 24 00:34:07.032572 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable Jan 24 00:34:07.032579 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable Jan 24 00:34:07.032586 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable Jan 24 00:34:07.032595 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable Jan 24 00:34:07.032602 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 24 00:34:07.032609 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 24 00:34:07.032616 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 24 00:34:07.032623 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 24 00:34:07.032630 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 24 00:34:07.032637 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 24 00:34:07.032644 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 24 00:34:07.032651 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 24 00:34:07.032658 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 24 00:34:07.032664 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 24 00:34:07.032673 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 24 00:34:07.032680 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 24 00:34:07.032687 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 24 00:34:07.032694 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 24 00:34:07.032701 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:34:07.032708 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 24 00:34:07.032715 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 24 00:34:07.032722 kernel: efi: EFI v2.7 by EDK II Jan 24 00:34:07.032729 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018 Jan 24 00:34:07.032737 kernel: random: crng init done Jan 24 00:34:07.032744 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 24 00:34:07.032752 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 24 00:34:07.032760 kernel: secureboot: Secure boot disabled Jan 24 00:34:07.032767 kernel: SMBIOS 2.8 present. 
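
The firmware memory map printed above (the BIOS-e820 entries and their extended setup_data counterparts) records which physical ranges the kernel may treat as usable RAM versus reserved or ACPI regions. Below is a minimal sketch for totalling the "usable" ranges from a saved copy of this console output; the file name "boot.log" is a placeholder for wherever the log was captured and is not part of the log itself.

#!/usr/bin/env python3
# Minimal sketch: sum the 'usable' ranges reported in the BIOS-e820 lines above.
# Assumes the console output was saved verbatim to a file; the name "boot.log"
# is an illustrative placeholder, not something the log defines.
import re

E820_LINE = re.compile(
    r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (usable|reserved|ACPI data|ACPI NVS)"
)

usable_bytes = 0
with open("boot.log") as log:
    for start, end, kind in E820_LINE.findall(log.read()):
        if kind == "usable":
            # e820 ranges are inclusive, hence the +1
            usable_bytes += int(end, 16) - int(start, 16) + 1

print(f"firmware-reported usable RAM: {usable_bytes / 2**20:.1f} MiB")

For the map above this comes to roughly 4 GiB, consistent with the memory total the kernel reports further down in the log.
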
Jan 24 00:34:07.032774 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 24 00:34:07.032781 kernel: DMI: Memory slots populated: 1/1 Jan 24 00:34:07.032788 kernel: Hypervisor detected: KVM Jan 24 00:34:07.032795 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 24 00:34:07.032802 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:34:07.032809 kernel: kvm-clock: using sched offset of 5197459378 cycles Jan 24 00:34:07.032816 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:34:07.032826 kernel: tsc: Detected 2294.608 MHz processor Jan 24 00:34:07.032833 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:34:07.032841 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:34:07.032848 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000 Jan 24 00:34:07.032856 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:34:07.032863 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:34:07.032871 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 24 00:34:07.032878 kernel: Using GB pages for direct mapping Jan 24 00:34:07.032888 kernel: ACPI: Early table checksum verification disabled Jan 24 00:34:07.032896 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 24 00:34:07.032903 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jan 24 00:34:07.032919 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:34:07.032926 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:34:07.032934 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 24 00:34:07.032941 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:34:07.032950 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:34:07.032958 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:34:07.032966 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 24 00:34:07.032973 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3] Jan 24 00:34:07.032981 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b] Jan 24 00:34:07.032988 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 24 00:34:07.032996 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f] Jan 24 00:34:07.033005 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b] Jan 24 00:34:07.033013 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027] Jan 24 00:34:07.033020 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037] Jan 24 00:34:07.033028 kernel: No NUMA configuration found Jan 24 00:34:07.033035 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 24 00:34:07.033042 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jan 24 00:34:07.033050 kernel: Zone ranges: Jan 24 00:34:07.033057 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:34:07.033066 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:34:07.033073 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 24 00:34:07.033081 kernel: Device empty Jan 24 00:34:07.033088 kernel: Movable zone start for each node Jan 
24 00:34:07.033096 kernel: Early memory node ranges Jan 24 00:34:07.033103 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:34:07.033110 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 24 00:34:07.033118 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 24 00:34:07.033127 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 24 00:34:07.033134 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff] Jan 24 00:34:07.033142 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff] Jan 24 00:34:07.033149 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff] Jan 24 00:34:07.033164 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff] Jan 24 00:34:07.033173 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff] Jan 24 00:34:07.033181 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 24 00:34:07.033189 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 24 00:34:07.033197 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:34:07.033207 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:34:07.033215 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 24 00:34:07.033223 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:34:07.033230 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 24 00:34:07.033238 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 24 00:34:07.033249 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges Jan 24 00:34:07.033257 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 24 00:34:07.033265 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 24 00:34:07.033273 kernel: On node 0, zone Normal: 276 pages in unavailable ranges Jan 24 00:34:07.033282 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:34:07.033290 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:34:07.033298 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:34:07.033308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:34:07.033316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:34:07.033324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:34:07.033332 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:34:07.033340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:34:07.033348 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:34:07.033356 kernel: TSC deadline timer available Jan 24 00:34:07.033366 kernel: CPU topo: Max. logical packages: 2 Jan 24 00:34:07.033374 kernel: CPU topo: Max. logical dies: 2 Jan 24 00:34:07.033382 kernel: CPU topo: Max. dies per package: 1 Jan 24 00:34:07.033390 kernel: CPU topo: Max. threads per core: 1 Jan 24 00:34:07.033398 kernel: CPU topo: Num. cores per package: 1 Jan 24 00:34:07.033406 kernel: CPU topo: Num. 
threads per package: 1 Jan 24 00:34:07.033414 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 24 00:34:07.033422 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:34:07.033432 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:34:07.033440 kernel: kvm-guest: setup PV sched yield Jan 24 00:34:07.033448 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 24 00:34:07.033456 kernel: Booting paravirtualized kernel on KVM Jan 24 00:34:07.033464 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:34:07.033473 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:34:07.033481 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 24 00:34:07.033491 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 24 00:34:07.033499 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:34:07.033507 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:34:07.033515 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:34:07.033524 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:34:07.033532 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:34:07.033542 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:34:07.033550 kernel: Fallback order for Node 0: 0 Jan 24 00:34:07.033558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694 Jan 24 00:34:07.033566 kernel: Policy zone: Normal Jan 24 00:34:07.033574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:34:07.033582 kernel: software IO TLB: area num 2. Jan 24 00:34:07.033590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:34:07.033600 kernel: ftrace: allocating 40128 entries in 157 pages Jan 24 00:34:07.033608 kernel: ftrace: allocated 157 pages with 5 groups Jan 24 00:34:07.033616 kernel: Dynamic Preempt: voluntary Jan 24 00:34:07.033624 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:34:07.033633 kernel: rcu: RCU event tracing is enabled. Jan 24 00:34:07.033641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:34:07.033649 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:34:07.033657 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:34:07.033668 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:34:07.033676 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:34:07.033684 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:34:07.033692 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:34:07.033700 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:34:07.033708 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
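
The "Kernel command line:" entry above repeats the bootloader-provided arguments, including the Flatcar-specific flags (flatcar.first_boot, flatcar.oem.id) and the dm-verity parameters for /usr (verity.usr, verity.usrhash). A minimal sketch for splitting such a command line into key/value pairs, for example when read back from /proc/cmdline on the running machine, is shown below; it is illustrative only and not part of Flatcar's own tooling.

#!/usr/bin/env python3
# Minimal sketch: split a kernel command line such as the one logged above
# into key/value pairs. Illustrative only.
def parse_cmdline(cmdline: str) -> dict[str, str]:
    params: dict[str, str] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        # Repeated keys (e.g. console= or rootflags= above) keep the last
        # occurrence in this simple version; flags without '=' get an empty value.
        params[key] = value
    return params

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print(params.get("flatcar.oem.id"), params.get("verity.usrhash"))
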
Jan 24 00:34:07.033716 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:34:07.033726 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:34:07.033734 kernel: Console: colour dummy device 80x25 Jan 24 00:34:07.033742 kernel: printk: legacy console [tty0] enabled Jan 24 00:34:07.033750 kernel: printk: legacy console [ttyS0] enabled Jan 24 00:34:07.033758 kernel: ACPI: Core revision 20240827 Jan 24 00:34:07.033766 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:34:07.033775 kernel: x2apic enabled Jan 24 00:34:07.033785 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:34:07.033793 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:34:07.033801 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:34:07.033809 kernel: kvm-guest: setup PV IPIs Jan 24 00:34:07.033817 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 24 00:34:07.033825 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jan 24 00:34:07.033833 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:34:07.033843 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 24 00:34:07.033851 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 24 00:34:07.033859 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:34:07.033866 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 24 00:34:07.033874 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 24 00:34:07.033882 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 24 00:34:07.033889 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 00:34:07.033897 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 00:34:07.033905 kernel: TAA: Mitigation: Clear CPU buffers Jan 24 00:34:07.035159 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 24 00:34:07.035169 kernel: active return thunk: its_return_thunk Jan 24 00:34:07.035180 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:34:07.035188 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:34:07.035205 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:34:07.035213 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:34:07.035221 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:34:07.035229 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:34:07.035237 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:34:07.035244 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:34:07.035252 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:34:07.035262 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 24 00:34:07.035269 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 24 00:34:07.035277 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 24 00:34:07.035285 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 24 00:34:07.035292 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Jan 24 00:34:07.035300 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:34:07.035307 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:34:07.035315 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 24 00:34:07.035323 kernel: landlock: Up and running. Jan 24 00:34:07.035331 kernel: SELinux: Initializing. Jan 24 00:34:07.035338 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:34:07.035348 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:34:07.035356 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6) Jan 24 00:34:07.035364 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver. Jan 24 00:34:07.035373 kernel: ... version: 2 Jan 24 00:34:07.035381 kernel: ... bit width: 48 Jan 24 00:34:07.035389 kernel: ... generic registers: 8 Jan 24 00:34:07.035397 kernel: ... value mask: 0000ffffffffffff Jan 24 00:34:07.035405 kernel: ... max period: 00007fffffffffff Jan 24 00:34:07.035416 kernel: ... fixed-purpose events: 3 Jan 24 00:34:07.035424 kernel: ... event mask: 00000007000000ff Jan 24 00:34:07.035432 kernel: signal: max sigframe size: 3632 Jan 24 00:34:07.035440 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:34:07.035449 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:34:07.035457 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 24 00:34:07.035465 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:34:07.035474 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:34:07.035484 kernel: .... node #0, CPUs: #1 Jan 24 00:34:07.035492 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:34:07.035500 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jan 24 00:34:07.035509 kernel: Memory: 3969764K/4186776K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 212136K reserved, 0K cma-reserved) Jan 24 00:34:07.035517 kernel: devtmpfs: initialized Jan 24 00:34:07.035525 kernel: x86/mm: Memory block size: 128MB Jan 24 00:34:07.035533 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 24 00:34:07.035543 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 24 00:34:07.035551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 24 00:34:07.035559 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 24 00:34:07.035567 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes) Jan 24 00:34:07.035575 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes) Jan 24 00:34:07.035584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:34:07.035594 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:34:07.035602 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:34:07.035610 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:34:07.035618 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:34:07.035626 kernel: audit: type=2000 audit(1769214843.963:1): state=initialized audit_enabled=0 res=1 Jan 24 00:34:07.035635 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:34:07.035643 kernel: thermal_sys: Registered thermal governor 'user_space' 
Jan 24 00:34:07.035651 kernel: cpuidle: using governor menu Jan 24 00:34:07.035661 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:34:07.035669 kernel: dca service started, version 1.12.1 Jan 24 00:34:07.035677 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 24 00:34:07.035685 kernel: PCI: Using configuration type 1 for base access Jan 24 00:34:07.035694 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:34:07.035702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:34:07.035710 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:34:07.035720 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:34:07.035728 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:34:07.035736 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:34:07.035744 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:34:07.035752 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:34:07.035760 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:34:07.035768 kernel: ACPI: Interpreter enabled Jan 24 00:34:07.035778 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:34:07.035786 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:34:07.035794 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:34:07.035802 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:34:07.035811 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:34:07.035819 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:34:07.035986 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:34:07.036096 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:34:07.036194 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:34:07.036204 kernel: PCI host bridge to bus 0000:00 Jan 24 00:34:07.036304 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:34:07.036393 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:34:07.036483 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:34:07.036570 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 24 00:34:07.036656 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 24 00:34:07.036743 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window] Jan 24 00:34:07.036865 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:34:07.037103 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 24 00:34:07.037223 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 24 00:34:07.037356 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref] Jan 24 00:34:07.037458 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref] Jan 24 00:34:07.037554 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff] Jan 24 00:34:07.037649 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 24 00:34:07.037747 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:34:07.037854 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 
0x060400 PCIe Root Port Jan 24 00:34:07.037964 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff] Jan 24 00:34:07.038061 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 24 00:34:07.038157 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 24 00:34:07.038254 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 24 00:34:07.038354 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 24 00:34:07.038455 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.038551 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff] Jan 24 00:34:07.038645 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 24 00:34:07.038739 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 24 00:34:07.038837 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 24 00:34:07.038945 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.039043 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff] Jan 24 00:34:07.039138 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 24 00:34:07.039244 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 24 00:34:07.039800 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 24 00:34:07.039931 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.040032 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff] Jan 24 00:34:07.040130 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 24 00:34:07.040228 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 24 00:34:07.040324 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 24 00:34:07.040426 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.040534 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff] Jan 24 00:34:07.040634 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 24 00:34:07.040730 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 24 00:34:07.040826 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 24 00:34:07.041947 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.042065 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff] Jan 24 00:34:07.042169 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 24 00:34:07.042268 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 24 00:34:07.042365 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 24 00:34:07.042469 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.042568 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff] Jan 24 00:34:07.042669 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 24 00:34:07.042765 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 24 00:34:07.042860 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 24 00:34:07.042974 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.043072 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff] Jan 24 00:34:07.043170 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 24 00:34:07.043284 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 24 
00:34:07.043382 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 24 00:34:07.043488 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.043586 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff] Jan 24 00:34:07.043684 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 24 00:34:07.043780 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 24 00:34:07.043879 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 24 00:34:07.045198 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.045326 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff] Jan 24 00:34:07.045426 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 24 00:34:07.045524 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 24 00:34:07.045620 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 24 00:34:07.045726 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.045823 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 24 00:34:07.047952 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 24 00:34:07.048085 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 24 00:34:07.048187 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 24 00:34:07.048291 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.048393 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 24 00:34:07.048493 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 24 00:34:07.048588 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 24 00:34:07.048686 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 24 00:34:07.048788 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.048885 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 24 00:34:07.048993 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 24 00:34:07.049088 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 24 00:34:07.049182 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 24 00:34:07.049291 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.049390 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 24 00:34:07.049488 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 24 00:34:07.049583 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 24 00:34:07.049679 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 24 00:34:07.049780 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.049878 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Jan 24 00:34:07.050395 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 24 00:34:07.050498 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 24 00:34:07.050594 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 24 00:34:07.050697 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.050794 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 24 00:34:07.050894 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 24 
00:34:07.051000 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 24 00:34:07.051096 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 24 00:34:07.051207 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.051320 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 24 00:34:07.051418 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 24 00:34:07.051516 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 24 00:34:07.051611 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 24 00:34:07.051712 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.051807 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 24 00:34:07.051902 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 24 00:34:07.052037 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 24 00:34:07.052136 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 24 00:34:07.052238 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.052335 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 24 00:34:07.052430 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 24 00:34:07.052524 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 24 00:34:07.052619 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 24 00:34:07.052721 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.052816 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 24 00:34:07.052919 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 24 00:34:07.053015 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 24 00:34:07.053109 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 24 00:34:07.053208 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.053307 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 24 00:34:07.053400 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 24 00:34:07.053494 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 24 00:34:07.053589 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 24 00:34:07.053687 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.053783 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 24 00:34:07.053879 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 24 00:34:07.053981 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 24 00:34:07.054075 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 24 00:34:07.054174 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.054272 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 24 00:34:07.054366 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 24 00:34:07.054459 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 24 00:34:07.054553 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 24 00:34:07.054654 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.054752 kernel: pci 0000:00:04.7: BAR 0 [mem 
0x84386000-0x84386fff] Jan 24 00:34:07.054845 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 24 00:34:07.054952 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 24 00:34:07.055086 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 24 00:34:07.055188 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.055298 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 24 00:34:07.055395 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 24 00:34:07.055490 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 24 00:34:07.055584 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 24 00:34:07.055683 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.055778 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 24 00:34:07.055872 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 24 00:34:07.055984 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 24 00:34:07.056078 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 24 00:34:07.056178 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.056273 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 24 00:34:07.056368 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 24 00:34:07.056461 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 24 00:34:07.056561 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 24 00:34:07.056663 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.056759 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 24 00:34:07.056855 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 24 00:34:07.056959 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 24 00:34:07.057055 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 24 00:34:07.057159 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 24 00:34:07.057255 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 24 00:34:07.057359 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 24 00:34:07.057454 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 24 00:34:07.057548 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 24 00:34:07.057647 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 24 00:34:07.057744 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:34:07.057846 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 24 00:34:07.057951 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 24 00:34:07.058047 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 24 00:34:07.058147 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 24 00:34:07.058246 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 24 00:34:07.058349 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 24 00:34:07.058447 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 24 00:34:07.058545 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 24 00:34:07.058642 kernel: pci 
0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 24 00:34:07.058738 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 24 00:34:07.058838 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 24 00:34:07.058941 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 24 00:34:07.059044 kernel: pci_bus 0000:02: extended config space not accessible Jan 24 00:34:07.059057 kernel: acpiphp: Slot [1] registered Jan 24 00:34:07.059066 kernel: acpiphp: Slot [0] registered Jan 24 00:34:07.059075 kernel: acpiphp: Slot [2] registered Jan 24 00:34:07.059085 kernel: acpiphp: Slot [3] registered Jan 24 00:34:07.059094 kernel: acpiphp: Slot [4] registered Jan 24 00:34:07.059102 kernel: acpiphp: Slot [5] registered Jan 24 00:34:07.059111 kernel: acpiphp: Slot [6] registered Jan 24 00:34:07.059119 kernel: acpiphp: Slot [7] registered Jan 24 00:34:07.059128 kernel: acpiphp: Slot [8] registered Jan 24 00:34:07.059136 kernel: acpiphp: Slot [9] registered Jan 24 00:34:07.059147 kernel: acpiphp: Slot [10] registered Jan 24 00:34:07.059155 kernel: acpiphp: Slot [11] registered Jan 24 00:34:07.059164 kernel: acpiphp: Slot [12] registered Jan 24 00:34:07.059172 kernel: acpiphp: Slot [13] registered Jan 24 00:34:07.059180 kernel: acpiphp: Slot [14] registered Jan 24 00:34:07.059189 kernel: acpiphp: Slot [15] registered Jan 24 00:34:07.059214 kernel: acpiphp: Slot [16] registered Jan 24 00:34:07.059222 kernel: acpiphp: Slot [17] registered Jan 24 00:34:07.059233 kernel: acpiphp: Slot [18] registered Jan 24 00:34:07.059241 kernel: acpiphp: Slot [19] registered Jan 24 00:34:07.059250 kernel: acpiphp: Slot [20] registered Jan 24 00:34:07.059258 kernel: acpiphp: Slot [21] registered Jan 24 00:34:07.059266 kernel: acpiphp: Slot [22] registered Jan 24 00:34:07.059275 kernel: acpiphp: Slot [23] registered Jan 24 00:34:07.059283 kernel: acpiphp: Slot [24] registered Jan 24 00:34:07.059293 kernel: acpiphp: Slot [25] registered Jan 24 00:34:07.059302 kernel: acpiphp: Slot [26] registered Jan 24 00:34:07.059310 kernel: acpiphp: Slot [27] registered Jan 24 00:34:07.059319 kernel: acpiphp: Slot [28] registered Jan 24 00:34:07.059327 kernel: acpiphp: Slot [29] registered Jan 24 00:34:07.059336 kernel: acpiphp: Slot [30] registered Jan 24 00:34:07.059344 kernel: acpiphp: Slot [31] registered Jan 24 00:34:07.059454 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 24 00:34:07.059561 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 24 00:34:07.059661 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 24 00:34:07.059672 kernel: acpiphp: Slot [0-2] registered Jan 24 00:34:07.059777 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 24 00:34:07.059875 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 24 00:34:07.059986 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 24 00:34:07.060088 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 24 00:34:07.060186 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 24 00:34:07.060197 kernel: acpiphp: Slot [0-3] registered Jan 24 00:34:07.060299 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 24 00:34:07.060398 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 24 00:34:07.060498 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 24 00:34:07.060594 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 24 00:34:07.060605 
kernel: acpiphp: Slot [0-4] registered Jan 24 00:34:07.060705 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 24 00:34:07.060804 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 24 00:34:07.060901 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 24 00:34:07.060924 kernel: acpiphp: Slot [0-5] registered Jan 24 00:34:07.061025 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 24 00:34:07.061124 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 24 00:34:07.061221 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 24 00:34:07.061317 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 24 00:34:07.061329 kernel: acpiphp: Slot [0-6] registered Jan 24 00:34:07.061425 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 24 00:34:07.061436 kernel: acpiphp: Slot [0-7] registered Jan 24 00:34:07.061531 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 24 00:34:07.061542 kernel: acpiphp: Slot [0-8] registered Jan 24 00:34:07.061637 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 24 00:34:07.061649 kernel: acpiphp: Slot [0-9] registered Jan 24 00:34:07.061743 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 24 00:34:07.061757 kernel: acpiphp: Slot [0-10] registered Jan 24 00:34:07.061851 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 24 00:34:07.061862 kernel: acpiphp: Slot [0-11] registered Jan 24 00:34:07.061962 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 24 00:34:07.061974 kernel: acpiphp: Slot [0-12] registered Jan 24 00:34:07.062068 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 24 00:34:07.062081 kernel: acpiphp: Slot [0-13] registered Jan 24 00:34:07.062175 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 24 00:34:07.062187 kernel: acpiphp: Slot [0-14] registered Jan 24 00:34:07.062280 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 24 00:34:07.062291 kernel: acpiphp: Slot [0-15] registered Jan 24 00:34:07.062383 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 24 00:34:07.062396 kernel: acpiphp: Slot [0-16] registered Jan 24 00:34:07.062490 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 24 00:34:07.062501 kernel: acpiphp: Slot [0-17] registered Jan 24 00:34:07.062593 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 24 00:34:07.062605 kernel: acpiphp: Slot [0-18] registered Jan 24 00:34:07.062698 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 24 00:34:07.062709 kernel: acpiphp: Slot [0-19] registered Jan 24 00:34:07.062804 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 24 00:34:07.062816 kernel: acpiphp: Slot [0-20] registered Jan 24 00:34:07.062916 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 24 00:34:07.062927 kernel: acpiphp: Slot [0-21] registered Jan 24 00:34:07.063022 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 24 00:34:07.063033 kernel: acpiphp: Slot [0-22] registered Jan 24 00:34:07.063128 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 24 00:34:07.063140 kernel: acpiphp: Slot [0-23] registered Jan 24 00:34:07.063243 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 24 00:34:07.063254 kernel: acpiphp: Slot [0-24] registered Jan 24 00:34:07.063348 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 24 00:34:07.063358 kernel: acpiphp: Slot [0-25] registered Jan 24 00:34:07.063451 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 24 00:34:07.064931 kernel: acpiphp: Slot [0-26] registered Jan 24 00:34:07.065076 kernel: pci 0000:00:05.1: PCI bridge to [bus 
1b] Jan 24 00:34:07.065090 kernel: acpiphp: Slot [0-27] registered Jan 24 00:34:07.065190 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 24 00:34:07.065201 kernel: acpiphp: Slot [0-28] registered Jan 24 00:34:07.065298 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 24 00:34:07.065313 kernel: acpiphp: Slot [0-29] registered Jan 24 00:34:07.065409 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 24 00:34:07.065420 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:34:07.065428 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:34:07.065437 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:34:07.065446 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:34:07.065455 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:34:07.065465 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:34:07.065474 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:34:07.065482 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:34:07.065491 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:34:07.065500 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:34:07.065508 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:34:07.065517 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:34:07.065527 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:34:07.065535 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:34:07.065544 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:34:07.065552 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:34:07.065560 kernel: iommu: Default domain type: Translated Jan 24 00:34:07.065569 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:34:07.065577 kernel: efivars: Registered efivars operations Jan 24 00:34:07.065588 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:34:07.065596 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:34:07.065606 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 24 00:34:07.065614 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 24 00:34:07.065622 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 24 00:34:07.065631 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 24 00:34:07.065639 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 24 00:34:07.065650 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 24 00:34:07.065658 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 24 00:34:07.065666 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 24 00:34:07.065675 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 24 00:34:07.065773 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:34:07.065868 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:34:07.066009 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:34:07.066023 kernel: vgaarb: loaded Jan 24 00:34:07.066032 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:34:07.066041 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:34:07.066049 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:34:07.066058 kernel: pnp: PnP ACPI init Jan 24 00:34:07.066169 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] 
has been reserved Jan 24 00:34:07.066182 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:34:07.066193 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:34:07.066202 kernel: NET: Registered PF_INET protocol family Jan 24 00:34:07.066211 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:34:07.066219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:34:07.066228 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:34:07.066237 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:34:07.066245 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:34:07.066256 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:34:07.066264 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:34:07.066273 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:34:07.066282 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:34:07.066291 kernel: NET: Registered PF_XDP protocol family Jan 24 00:34:07.066393 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 24 00:34:07.066493 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 24 00:34:07.066590 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 24 00:34:07.066689 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 24 00:34:07.066784 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 24 00:34:07.066880 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 24 00:34:07.066991 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 24 00:34:07.067089 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 24 00:34:07.067189 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 24 00:34:07.067297 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 24 00:34:07.067395 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 24 00:34:07.067491 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 24 00:34:07.067588 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 24 00:34:07.067685 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 24 00:34:07.067784 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 24 00:34:07.067882 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 24 00:34:07.068473 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 24 00:34:07.068574 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 24 00:34:07.068670 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 24 00:34:07.068766 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 24 00:34:07.068867 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 24 00:34:07.069480 kernel: pci 
0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 24 00:34:07.069583 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 24 00:34:07.069683 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 24 00:34:07.069779 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 24 00:34:07.069874 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 24 00:34:07.069980 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 24 00:34:07.070084 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 24 00:34:07.070180 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 24 00:34:07.070277 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 24 00:34:07.070373 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 24 00:34:07.070470 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 24 00:34:07.070566 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 24 00:34:07.070665 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 24 00:34:07.070761 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 24 00:34:07.070857 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 24 00:34:07.071018 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 24 00:34:07.071116 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 24 00:34:07.071221 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 24 00:34:07.071318 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 24 00:34:07.071419 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 24 00:34:07.071515 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 24 00:34:07.071611 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.071706 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.071802 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.071896 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.072010 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.072105 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.072200 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.072295 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.072390 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.072485 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.072582 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.072676 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.072771 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.072866 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.072978 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 24 
00:34:07.073073 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.073169 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.073266 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.073361 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.073455 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.073550 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.073644 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.073738 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.073835 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.073943 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.074039 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.074133 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.074228 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.074322 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.074419 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.074513 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 24 00:34:07.074606 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 24 00:34:07.074701 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 24 00:34:07.074795 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 24 00:34:07.074890 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 24 00:34:07.074993 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 24 00:34:07.075091 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 24 00:34:07.075186 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 24 00:34:07.075288 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 24 00:34:07.075383 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 24 00:34:07.075477 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 24 00:34:07.075571 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 24 00:34:07.075668 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 24 00:34:07.075764 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.076091 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.076190 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.076285 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.076380 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.076474 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.076572 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.076666 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.076761 kernel: pci 
0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.076855 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.076971 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.077069 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.077163 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.077261 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.077356 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.077730 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.077830 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.077935 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.079990 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.080115 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.080216 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.080312 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.080410 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.080505 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.080601 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.080698 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.080794 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.080889 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.081004 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 24 00:34:07.081099 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 24 00:34:07.081202 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 24 00:34:07.081299 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 24 00:34:07.081399 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 24 00:34:07.081497 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 24 00:34:07.081593 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 24 00:34:07.081689 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 24 00:34:07.081783 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 24 00:34:07.081877 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 24 00:34:07.081985 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Jan 24 00:34:07.082083 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 24 00:34:07.082177 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 24 00:34:07.082272 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 24 00:34:07.082367 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 24 00:34:07.082461 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 24 00:34:07.082556 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 24 
00:34:07.082651 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 24 00:34:07.082745 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 24 00:34:07.082844 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 24 00:34:07.083985 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 24 00:34:07.084095 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 24 00:34:07.084192 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 24 00:34:07.084289 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 24 00:34:07.084383 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 24 00:34:07.084480 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 24 00:34:07.084580 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 24 00:34:07.084674 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 24 00:34:07.084769 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 24 00:34:07.084863 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 24 00:34:07.084975 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 24 00:34:07.085072 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 24 00:34:07.085170 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 24 00:34:07.085264 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 24 00:34:07.085359 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 24 00:34:07.085454 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 24 00:34:07.085548 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 24 00:34:07.085642 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 24 00:34:07.085738 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 24 00:34:07.085835 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 24 00:34:07.088636 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 24 00:34:07.088764 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 24 00:34:07.088865 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 24 00:34:07.089382 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 24 00:34:07.089486 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 24 00:34:07.089585 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 24 00:34:07.089682 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 24 00:34:07.089784 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 24 00:34:07.089880 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 24 00:34:07.089989 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 24 00:34:07.090087 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 24 00:34:07.090183 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 24 00:34:07.090279 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 24 00:34:07.090375 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 24 00:34:07.090474 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 24 00:34:07.090568 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 24 
00:34:07.090664 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 24 00:34:07.090758 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 24 00:34:07.090853 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 24 00:34:07.090965 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 24 00:34:07.091065 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 24 00:34:07.091159 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 24 00:34:07.091265 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 24 00:34:07.091361 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 24 00:34:07.091457 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 24 00:34:07.091552 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 24 00:34:07.091668 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 24 00:34:07.091765 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 24 00:34:07.091862 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 24 00:34:07.091986 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 24 00:34:07.092081 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 24 00:34:07.092176 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 24 00:34:07.092274 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 24 00:34:07.092369 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 24 00:34:07.092463 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 24 00:34:07.092557 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 24 00:34:07.092654 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 24 00:34:07.092749 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 24 00:34:07.092846 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 24 00:34:07.092947 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 24 00:34:07.093050 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 24 00:34:07.093155 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 24 00:34:07.093250 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 24 00:34:07.093345 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 24 00:34:07.093444 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 24 00:34:07.093538 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 24 00:34:07.093633 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 24 00:34:07.093727 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 24 00:34:07.093825 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 24 00:34:07.093943 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 24 00:34:07.094044 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 24 00:34:07.094138 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 24 00:34:07.094235 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 24 00:34:07.094330 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 24 00:34:07.094425 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 24 00:34:07.094519 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 24 
00:34:07.094619 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 24 00:34:07.094715 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 24 00:34:07.094810 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 24 00:34:07.094905 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 24 00:34:07.095017 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 24 00:34:07.095112 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 24 00:34:07.095216 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 24 00:34:07.095315 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 24 00:34:07.095413 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 24 00:34:07.095507 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 24 00:34:07.095602 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 24 00:34:07.095696 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 24 00:34:07.095794 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:34:07.095883 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:34:07.095984 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:34:07.096071 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 24 00:34:07.096157 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 24 00:34:07.096243 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 24 00:34:07.096341 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 24 00:34:07.096434 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 24 00:34:07.096523 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 24 00:34:07.096619 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 24 00:34:07.096711 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 24 00:34:07.096802 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 24 00:34:07.096898 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 24 00:34:07.097012 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 24 00:34:07.097106 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 24 00:34:07.097196 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 24 00:34:07.097292 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 24 00:34:07.097385 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 24 00:34:07.097479 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 24 00:34:07.097568 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 24 00:34:07.097661 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 24 00:34:07.097749 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 24 00:34:07.097844 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 24 00:34:07.097944 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 24 00:34:07.098039 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 24 00:34:07.098128 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 24 00:34:07.098221 kernel: pci_bus 0000:0a: 
resource 1 [mem 0x83000000-0x831fffff] Jan 24 00:34:07.098310 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 24 00:34:07.098406 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 24 00:34:07.098495 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 24 00:34:07.098588 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 24 00:34:07.098677 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 24 00:34:07.098776 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 24 00:34:07.098865 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 24 00:34:07.098967 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 24 00:34:07.099057 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 24 00:34:07.099151 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 24 00:34:07.099249 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 24 00:34:07.099345 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 24 00:34:07.099435 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 24 00:34:07.099529 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 24 00:34:07.099619 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 24 00:34:07.099715 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 24 00:34:07.099808 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 24 00:34:07.099898 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 24 00:34:07.100000 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 24 00:34:07.100090 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 24 00:34:07.100178 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 24 00:34:07.100274 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 24 00:34:07.100363 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 24 00:34:07.100452 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 24 00:34:07.100546 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 24 00:34:07.100636 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 24 00:34:07.100725 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 24 00:34:07.100824 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 24 00:34:07.100920 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Jan 24 00:34:07.101017 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 24 00:34:07.101111 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 24 00:34:07.101200 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 24 00:34:07.101288 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 24 00:34:07.101385 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 24 00:34:07.101473 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 24 00:34:07.101562 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 24 00:34:07.101656 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 24 00:34:07.101746 kernel: pci_bus 0000:19: resource 1 [mem 
0x81200000-0x813fffff] Jan 24 00:34:07.101838 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 24 00:34:07.101942 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 24 00:34:07.102033 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 24 00:34:07.102122 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 24 00:34:07.102217 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 24 00:34:07.102307 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 24 00:34:07.102399 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 24 00:34:07.102494 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 24 00:34:07.102584 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 24 00:34:07.102673 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 24 00:34:07.102766 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 24 00:34:07.102855 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 24 00:34:07.102953 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 24 00:34:07.103050 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 24 00:34:07.103148 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 24 00:34:07.103249 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 24 00:34:07.103262 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:34:07.103274 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:34:07.103284 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:34:07.103293 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 24 00:34:07.103302 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:34:07.103311 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 24 00:34:07.103320 kernel: Initialise system trusted keyrings Jan 24 00:34:07.103329 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:34:07.103340 kernel: Key type asymmetric registered Jan 24 00:34:07.103349 kernel: Asymmetric key parser 'x509' registered Jan 24 00:34:07.103357 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 24 00:34:07.103366 kernel: io scheduler mq-deadline registered Jan 24 00:34:07.103374 kernel: io scheduler kyber registered Jan 24 00:34:07.103384 kernel: io scheduler bfq registered Jan 24 00:34:07.103485 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 00:34:07.103588 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 00:34:07.103687 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 00:34:07.103783 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 24 00:34:07.103881 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 24 00:34:07.103998 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 00:34:07.104106 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 00:34:07.104234 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 24 00:34:07.104332 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 00:34:07.104427 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 00:34:07.104525 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 00:34:07.104623 kernel: pcieport 
0000:00:02.5: AER: enabled with IRQ 29 Jan 24 00:34:07.104720 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 00:34:07.104816 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 00:34:07.104927 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 00:34:07.105026 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 00:34:07.105040 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:34:07.105135 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 24 00:34:07.105232 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 24 00:34:07.105328 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 24 00:34:07.105424 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 24 00:34:07.105524 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 24 00:34:07.105620 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 24 00:34:07.105717 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 24 00:34:07.105812 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 24 00:34:07.105907 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 24 00:34:07.106015 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 24 00:34:07.106112 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 24 00:34:07.106207 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 24 00:34:07.106304 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 24 00:34:07.106400 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 24 00:34:07.106499 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 24 00:34:07.106595 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 24 00:34:07.106606 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 00:34:07.106700 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 24 00:34:07.106795 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 24 00:34:07.106891 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 24 00:34:07.109051 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 24 00:34:07.109175 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 24 00:34:07.109277 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 24 00:34:07.109375 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 24 00:34:07.109472 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 24 00:34:07.109569 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 24 00:34:07.109666 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 24 00:34:07.109762 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 24 00:34:07.109862 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 24 00:34:07.109969 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 24 00:34:07.110069 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 24 00:34:07.110167 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 24 00:34:07.110265 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 24 00:34:07.110277 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 24 00:34:07.110376 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 24 00:34:07.110473 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 24 00:34:07.110570 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 24 00:34:07.110666 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 24 00:34:07.110763 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 24 00:34:07.110858 kernel: 
pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 24 00:34:07.110976 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Jan 24 00:34:07.111077 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 24 00:34:07.111173 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 24 00:34:07.111318 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 24 00:34:07.111330 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:34:07.111339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:34:07.111348 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:34:07.111358 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:34:07.111369 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:34:07.111378 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:34:07.111484 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 00:34:07.111496 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:34:07.111586 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 00:34:07.111677 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:34:05 UTC (1769214845) Jan 24 00:34:07.111770 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 24 00:34:07.111781 kernel: intel_pstate: CPU model not supported Jan 24 00:34:07.111790 kernel: efifb: probing for efifb Jan 24 00:34:07.111799 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 24 00:34:07.111809 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 24 00:34:07.111817 kernel: efifb: scrolling: redraw Jan 24 00:34:07.111826 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:34:07.111837 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:34:07.111845 kernel: fb0: EFI VGA frame buffer device Jan 24 00:34:07.111854 kernel: pstore: Using crash dump compression: deflate Jan 24 00:34:07.111863 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:34:07.111871 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:34:07.111880 kernel: Segment Routing with IPv6 Jan 24 00:34:07.111889 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:34:07.111897 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:34:07.111927 kernel: Key type dns_resolver registered Jan 24 00:34:07.111937 kernel: IPI shorthand broadcast: enabled Jan 24 00:34:07.111946 kernel: sched_clock: Marking stable (2370001968, 153450706)->(2791359671, -267906997) Jan 24 00:34:07.111954 kernel: registered taskstats version 1 Jan 24 00:34:07.111963 kernel: Loading compiled-in X.509 certificates Jan 24 00:34:07.111972 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 08600fac738f210e3b32f727339edfe2b1af2e3d' Jan 24 00:34:07.111980 kernel: Demotion targets for Node 0: null Jan 24 00:34:07.111991 kernel: Key type .fscrypt registered Jan 24 00:34:07.111999 kernel: Key type fscrypt-provisioning registered Jan 24 00:34:07.112008 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 00:34:07.112016 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:34:07.112025 kernel: ima: No architecture policies found Jan 24 00:34:07.112033 kernel: clk: Disabling unused clocks Jan 24 00:34:07.112042 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 24 00:34:07.112053 kernel: Write protecting the kernel read-only data: 47104k Jan 24 00:34:07.112061 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 24 00:34:07.112070 kernel: Run /init as init process Jan 24 00:34:07.112078 kernel: with arguments: Jan 24 00:34:07.112087 kernel: /init Jan 24 00:34:07.112096 kernel: with environment: Jan 24 00:34:07.112104 kernel: HOME=/ Jan 24 00:34:07.112113 kernel: TERM=linux Jan 24 00:34:07.112124 kernel: SCSI subsystem initialized Jan 24 00:34:07.112132 kernel: libata version 3.00 loaded. Jan 24 00:34:07.112234 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:34:07.112246 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:34:07.112342 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 24 00:34:07.112439 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 24 00:34:07.112538 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:34:07.112647 kernel: scsi host0: ahci Jan 24 00:34:07.112749 kernel: scsi host1: ahci Jan 24 00:34:07.112871 kernel: scsi host2: ahci Jan 24 00:34:07.113893 kernel: scsi host3: ahci Jan 24 00:34:07.114018 kernel: scsi host4: ahci Jan 24 00:34:07.114126 kernel: scsi host5: ahci Jan 24 00:34:07.114139 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 55 lpm-pol 1 Jan 24 00:34:07.114148 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 55 lpm-pol 1 Jan 24 00:34:07.114157 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 55 lpm-pol 1 Jan 24 00:34:07.114165 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 55 lpm-pol 1 Jan 24 00:34:07.114174 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 55 lpm-pol 1 Jan 24 00:34:07.114186 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 55 lpm-pol 1 Jan 24 00:34:07.114195 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:34:07.114204 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:34:07.114213 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:34:07.114222 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:34:07.114230 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 00:34:07.114239 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:34:07.114250 kernel: ACPI: bus type USB registered Jan 24 00:34:07.114259 kernel: usbcore: registered new interface driver usbfs Jan 24 00:34:07.114268 kernel: usbcore: registered new interface driver hub Jan 24 00:34:07.114276 kernel: usbcore: registered new device driver usb Jan 24 00:34:07.114383 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 24 00:34:07.114487 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 24 00:34:07.114589 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 24 00:34:07.114693 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 24 00:34:07.114827 kernel: hub 1-0:1.0: USB hub found Jan 24 00:34:07.114953 kernel: hub 1-0:1.0: 2 ports detected Jan 24 00:34:07.115062 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 24 00:34:07.115161 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks 
(53.7 GB/50.0 GiB) Jan 24 00:34:07.115175 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:34:07.115185 kernel: GPT:25804799 != 104857599 Jan 24 00:34:07.115202 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:34:07.115211 kernel: GPT:25804799 != 104857599 Jan 24 00:34:07.115220 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:34:07.115229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:34:07.115238 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:34:07.115249 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:34:07.115258 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 24 00:34:07.115268 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 24 00:34:07.115277 kernel: raid6: avx512x4 gen() 43032 MB/s Jan 24 00:34:07.115285 kernel: raid6: avx512x2 gen() 46297 MB/s Jan 24 00:34:07.115294 kernel: raid6: avx512x1 gen() 44424 MB/s Jan 24 00:34:07.115303 kernel: raid6: avx2x4 gen() 34725 MB/s Jan 24 00:34:07.115313 kernel: raid6: avx2x2 gen() 33991 MB/s Jan 24 00:34:07.115322 kernel: raid6: avx2x1 gen() 30559 MB/s Jan 24 00:34:07.115331 kernel: raid6: using algorithm avx512x2 gen() 46297 MB/s Jan 24 00:34:07.115340 kernel: raid6: .... xor() 26777 MB/s, rmw enabled Jan 24 00:34:07.115350 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:34:07.115359 kernel: xor: automatically using best checksumming function avx Jan 24 00:34:07.115485 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 24 00:34:07.115499 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:34:07.115507 kernel: BTRFS: device fsid 091bfa4a-922a-4e6e-abc1-a4b74083975f devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (203) Jan 24 00:34:07.115517 kernel: BTRFS info (device dm-0): first mount of filesystem 091bfa4a-922a-4e6e-abc1-a4b74083975f Jan 24 00:34:07.115525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:34:07.115534 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:34:07.115545 kernel: BTRFS info (device dm-0): enabling free space tree Jan 24 00:34:07.115554 kernel: loop: module loaded Jan 24 00:34:07.115563 kernel: loop0: detected capacity change from 0 to 100560 Jan 24 00:34:07.115572 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:34:07.115583 systemd[1]: Successfully made /usr/ read-only. Jan 24 00:34:07.115595 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:34:07.115606 systemd[1]: Detected virtualization kvm. Jan 24 00:34:07.115615 systemd[1]: Detected architecture x86-64. Jan 24 00:34:07.115624 systemd[1]: Running in initrd. Jan 24 00:34:07.115633 systemd[1]: No hostname configured, using default hostname. Jan 24 00:34:07.115642 systemd[1]: Hostname set to <localhost>. Jan 24 00:34:07.115651 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 00:34:07.115660 systemd[1]: Queued start job for default target initrd.target.
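Two of the numbers above are worth relating. The virtio_blk line reports 104857600 512-byte logical blocks, which is exactly where the "(53.7 GB/50.0 GiB)" figure comes from, and the GPT warning "25804799 != 104857599" means the backup GPT header still sits at the last sector of a much smaller original image instead of at the end of the grown disk (presumably why disk-uuid regenerates the table later in this boot). A small sketch of the arithmetic:

    SECTOR = 512                          # logical block size reported by virtio_blk
    blocks = 104857600                    # [vda] block count from the log
    size = blocks * SECTOR

    print(round(size / 1e9, 1))           # 53.7  -> "53.7 GB"
    print(size / 2**30)                   # 50.0  -> "50.0 GiB"

    print(blocks - 1)                     # 104857599: where the alternate GPT header belongs
    found_at = 25804799                   # where the kernel actually found it
    print(round((found_at + 1) * SECTOR / 2**30, 1))   # ~12.3 GiB: size of the original image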
Jan 24 00:34:07.115672 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:34:07.115681 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:34:07.115690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:34:07.115701 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:34:07.115710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:34:07.115719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:34:07.115730 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:34:07.115739 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:34:07.115748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:34:07.115758 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:34:07.115767 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:34:07.115776 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:34:07.115787 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:34:07.115796 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:34:07.115805 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:34:07.115815 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:34:07.115824 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:34:07.115833 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:34:07.115842 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 24 00:34:07.115853 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:34:07.115862 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:34:07.115871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:34:07.115880 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:34:07.115890 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:34:07.115899 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:34:07.116266 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:34:07.116283 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:34:07.116293 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 24 00:34:07.116303 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:34:07.116312 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:34:07.116321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:34:07.116332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:07.116341 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
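The "Expecting device dev-disk-by\x2dlabel-…" entries above show systemd's escaped .device unit names next to the paths they stand for: "/" becomes "-" and a literal "-" becomes "\x2d" (systemd-escape --path --unescape performs this conversion for real). A rough Python sketch of the unescaping direction, for illustration only:

    import re

    def unescape_device_unit(name: str) -> str:
        """Rough inverse of systemd's path escaping for .device unit names."""
        stem = name.removesuffix(".device")
        path = stem.replace("-", "/")                 # "-" separates path components
        path = re.sub(r"\\x([0-9a-fA-F]{2})",         # "\x2d" is an escaped literal "-"
                      lambda m: chr(int(m.group(1), 16)), path)
        return "/" + path

    print(unescape_device_unit(r"dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device"))
    # -> /dev/disk/by-label/EFI-SYSTEM, as paired in the log entries above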
Jan 24 00:34:07.116351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:34:07.116360 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:34:07.116369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:34:07.116405 systemd-journald[340]: Collecting audit messages is enabled. Jan 24 00:34:07.116428 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:34:07.116438 kernel: Bridge firewalling registered Jan 24 00:34:07.116449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:34:07.116459 kernel: audit: type=1130 audit(1769214847.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.116471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:34:07.116481 kernel: audit: type=1130 audit(1769214847.058:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.116491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:34:07.116500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:34:07.116511 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:07.116520 kernel: audit: type=1130 audit(1769214847.078:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.116529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:34:07.116538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:34:07.116548 kernel: audit: type=1130 audit(1769214847.096:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.116557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:34:07.116566 kernel: audit: type=1130 audit(1769214847.105:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.116577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:34:07.116586 kernel: audit: type=1334 audit(1769214847.110:7): prog-id=6 op=LOAD Jan 24 00:34:07.116596 systemd-journald[340]: Journal started Jan 24 00:34:07.116617 systemd-journald[340]: Runtime Journal (/run/log/journal/a49ad875df124bed8df5a021cbd91266) is 8M, max 77.9M, 69.9M free. Jan 24 00:34:07.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:07.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.110000 audit: BPF prog-id=6 op=LOAD Jan 24 00:34:07.049282 systemd-modules-load[343]: Inserted module 'br_netfilter' Jan 24 00:34:07.120990 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:34:07.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.125943 kernel: audit: type=1130 audit(1769214847.121:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.127540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:34:07.132962 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:34:07.139665 kernel: audit: type=1130 audit(1769214847.135:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.142201 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:34:07.148473 systemd-tmpfiles[374]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 24 00:34:07.161054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:34:07.167512 kernel: audit: type=1130 audit(1769214847.160:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.165475 systemd-resolved[356]: Positive Trust Anchors: Jan 24 00:34:07.165482 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:34:07.165485 systemd-resolved[356]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:34:07.165516 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:34:07.180952 dracut-cmdline[377]: dracut-109 Jan 24 00:34:07.185884 dracut-cmdline[377]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:34:07.194544 systemd-resolved[356]: Defaulting to hostname 'linux'. Jan 24 00:34:07.195490 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:34:07.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.196442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:34:07.267942 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:34:07.282933 kernel: iscsi: registered transport (tcp) Jan 24 00:34:07.305972 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:34:07.306055 kernel: QLogic iSCSI HBA Driver Jan 24 00:34:07.332479 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:34:07.362272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:34:07.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.364276 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:34:07.411728 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:34:07.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.413719 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:34:07.414747 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:34:07.448655 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:34:07.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:07.448000 audit: BPF prog-id=7 op=LOAD Jan 24 00:34:07.448000 audit: BPF prog-id=8 op=LOAD Jan 24 00:34:07.450297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:34:07.476093 systemd-udevd[606]: Using default interface naming scheme 'v257'. Jan 24 00:34:07.485126 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:34:07.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.488646 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:34:07.513660 dracut-pre-trigger[666]: rd.md=0: removing MD RAID activation Jan 24 00:34:07.525893 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:34:07.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.527000 audit: BPF prog-id=9 op=LOAD Jan 24 00:34:07.530083 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:34:07.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.540745 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:34:07.543018 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:34:07.571504 systemd-networkd[736]: lo: Link UP Jan 24 00:34:07.572185 systemd-networkd[736]: lo: Gained carrier Jan 24 00:34:07.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.572576 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:34:07.575030 systemd[1]: Reached target network.target - Network. Jan 24 00:34:07.631062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:34:07.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.634331 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:34:07.712111 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:34:07.723840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:34:07.747334 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:34:07.758104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:34:07.761491 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:34:07.772925 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:34:07.785810 disk-uuid[785]: Primary Header is updated. Jan 24 00:34:07.785810 disk-uuid[785]: Secondary Entries is updated. Jan 24 00:34:07.785810 disk-uuid[785]: Secondary Header is updated. 
Jan 24 00:34:07.813932 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 00:34:07.825214 kernel: AES CTR mode by8 optimization enabled Jan 24 00:34:07.825453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:34:07.825556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:07.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.827770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:07.831071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:07.860299 kernel: usbcore: registered new interface driver usbhid Jan 24 00:34:07.860358 kernel: usbhid: USB HID core driver Jan 24 00:34:07.868772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:34:07.869381 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 24 00:34:07.870975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:07.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.875540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:07.896949 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 24 00:34:07.900604 systemd-networkd[736]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:34:07.901757 systemd-networkd[736]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:34:07.903540 systemd-networkd[736]: eth0: Link UP Jan 24 00:34:07.903685 systemd-networkd[736]: eth0: Gained carrier Jan 24 00:34:07.903696 systemd-networkd[736]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:34:07.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.908190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:07.916071 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 24 00:34:07.970441 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:34:07.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:07.971580 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:34:07.972716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 24 00:34:07.973184 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:34:07.974922 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:34:07.989662 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:34:07.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:08.034008 systemd-networkd[736]: eth0: DHCPv4 address 10.0.1.115/25, gateway 10.0.1.1 acquired from 10.0.1.1 Jan 24 00:34:08.833369 disk-uuid[786]: Warning: The kernel is still using the old partition table. Jan 24 00:34:08.833369 disk-uuid[786]: The new table will be used at the next reboot or after you Jan 24 00:34:08.833369 disk-uuid[786]: run partprobe(8) or kpartx(8) Jan 24 00:34:08.833369 disk-uuid[786]: The operation has completed successfully. Jan 24 00:34:08.847176 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:34:08.847478 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:34:08.857823 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 24 00:34:08.857853 kernel: audit: type=1130 audit(1769214848.848:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:08.857867 kernel: audit: type=1131 audit(1769214848.848:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:08.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:08.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:08.852504 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:34:08.899930 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918) Jan 24 00:34:08.904600 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:34:08.904646 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:34:08.910444 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:34:08.910490 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:34:08.917927 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:34:08.918303 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:34:08.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:08.922036 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:34:08.925152 kernel: audit: type=1130 audit(1769214848.918:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:09.122588 ignition[937]: Ignition 2.24.0 Jan 24 00:34:09.123452 ignition[937]: Stage: fetch-offline Jan 24 00:34:09.124602 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:34:09.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:09.123495 ignition[937]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:34:09.130045 kernel: audit: type=1130 audit(1769214849.124:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:09.123505 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 00:34:09.129593 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 24 00:34:09.123577 ignition[937]: parsed url from cmdline: "" Jan 24 00:34:09.123581 ignition[937]: no config URL provided Jan 24 00:34:09.123585 ignition[937]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:34:09.123592 ignition[937]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:34:09.123596 ignition[937]: failed to fetch config: resource requires networking Jan 24 00:34:09.123729 ignition[937]: Ignition finished successfully Jan 24 00:34:09.156478 ignition[945]: Ignition 2.24.0 Jan 24 00:34:09.156491 ignition[945]: Stage: fetch Jan 24 00:34:09.156632 ignition[945]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:34:09.156640 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 00:34:09.156725 ignition[945]: parsed url from cmdline: "" Jan 24 00:34:09.156728 ignition[945]: no config URL provided Jan 24 00:34:09.156737 ignition[945]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:34:09.156742 ignition[945]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:34:09.156809 ignition[945]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 24 00:34:09.157028 ignition[945]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 24 00:34:09.157048 ignition[945]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 24 00:34:09.580263 systemd-networkd[736]: eth0: Gained IPv6LL Jan 24 00:34:10.157259 ignition[945]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 24 00:34:10.157329 ignition[945]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 24 00:34:10.223838 ignition[945]: GET result: OK Jan 24 00:34:10.224013 ignition[945]: parsing config with SHA512: 2c1466df41c381229ef7da2c212e835dad06444620c63b3f8b9a79b06be28fd7e63eb74655ed2ac7aae1bab5f253155d5d4b78826f135137d9dc05bf78bbabc1 Jan 24 00:34:10.230340 unknown[945]: fetched base config from "system" Jan 24 00:34:10.230348 unknown[945]: fetched base config from "system" Jan 24 00:34:10.231195 ignition[945]: fetch: fetch complete Jan 24 00:34:10.230353 unknown[945]: fetched user config from "openstack" Jan 24 00:34:10.231200 ignition[945]: fetch: fetch passed Jan 24 00:34:10.231248 ignition[945]: Ignition finished successfully Jan 24 00:34:10.233631 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
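The fetch stage above first waits for a config drive labeled config-2/CONFIG-2 and, when none appears, falls back to the OpenStack metadata service at http://169.254.169.254/openstack/latest/user_data, retrying until the GET succeeds. The sketch below is a minimal, hypothetical Python rendering of that polling pattern; it is only an illustration of what the log shows, not Ignition's actual (Go) implementation, and the retry count and delay are made-up values.

```python
import time
import urllib.error
import urllib.request
from pathlib import Path
from typing import Optional

USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"  # endpoint seen in the log
CONFIG_DRIVE_LABELS = ("config-2", "CONFIG-2")  # labels the fetch stage waits for in the log


def config_drive_present() -> bool:
    """Return True if a config drive shows up under /dev/disk/by-label/."""
    return any(Path("/dev/disk/by-label", label).exists() for label in CONFIG_DRIVE_LABELS)


def fetch_user_data(retries: int = 10, delay: float = 1.0) -> Optional[bytes]:
    """Poll the config drive and the metadata endpoint, roughly mirroring the log's attempts."""
    for attempt in range(1, retries + 1):
        if config_drive_present():
            print("config drive found; user_data would be read from it instead")
            return None
        try:
            with urllib.request.urlopen(USER_DATA_URL, timeout=5) as resp:
                return resp.read()
        except urllib.error.URLError as err:
            print(f"attempt #{attempt} failed: {err}; retrying")
            time.sleep(delay)
    return None


if __name__ == "__main__":
    data = fetch_user_data()
    print("fetched" if data else "no user data available")
```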
Jan 24 00:34:10.237538 kernel: audit: type=1130 audit(1769214850.233:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.237074 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:34:10.265107 ignition[951]: Ignition 2.24.0 Jan 24 00:34:10.265118 ignition[951]: Stage: kargs Jan 24 00:34:10.265279 ignition[951]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:34:10.265288 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 00:34:10.266148 ignition[951]: kargs: kargs passed Jan 24 00:34:10.266186 ignition[951]: Ignition finished successfully Jan 24 00:34:10.268217 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:34:10.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.271022 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:34:10.272947 kernel: audit: type=1130 audit(1769214850.268:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.315505 ignition[957]: Ignition 2.24.0 Jan 24 00:34:10.315519 ignition[957]: Stage: disks Jan 24 00:34:10.315699 ignition[957]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:34:10.315709 ignition[957]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 00:34:10.316574 ignition[957]: disks: disks passed Jan 24 00:34:10.316624 ignition[957]: Ignition finished successfully Jan 24 00:34:10.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.318697 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:34:10.319491 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:34:10.325016 kernel: audit: type=1130 audit(1769214850.318:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.323987 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:34:10.324456 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:34:10.324762 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:34:10.325135 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:34:10.326635 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:34:10.365493 systemd-fsck[965]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 24 00:34:10.367625 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 24 00:34:10.371794 kernel: audit: type=1130 audit(1769214850.367:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.371017 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:34:10.510940 kernel: EXT4-fs (vda9): mounted filesystem 4e30a7d6-83d2-471c-98e0-68a57c0656af r/w with ordered data mode. Quota mode: none. Jan 24 00:34:10.511688 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:34:10.513238 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:34:10.517046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:34:10.519995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:34:10.521101 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:34:10.532035 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 24 00:34:10.532598 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:34:10.532626 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:34:10.536834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:34:10.540301 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:34:10.548054 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (973) Jan 24 00:34:10.551242 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:34:10.551284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:34:10.560752 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:34:10.560791 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:34:10.562254 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:34:10.626938 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:10.752746 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:34:10.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.757008 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:34:10.759069 kernel: audit: type=1130 audit(1769214850.753:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.760290 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:34:10.777003 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:34:10.780955 kernel: BTRFS info (device vda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:34:10.803558 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
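At this point the initrd has mounted the EXT4 root filesystem by label at /sysroot and the btrfs OEM partition at /sysroot/oem; the kernel messages mention async discard and the free space tree being enabled for the btrfs mount. Below is a hypothetical shell-out sketch of equivalent manual mounts. Mapping those messages to the mount options discard=async and space_cache=v2 is my assumption; the log does not show which options, if any, the initrd passes explicitly.

```python
import subprocess

MOUNTS = [
    # (source, target, options) -- an illustrative approximation of what the log shows
    ("/dev/disk/by-label/ROOT", "/sysroot", None),
    ("/dev/disk/by-label/OEM", "/sysroot/oem", "discard=async,space_cache=v2"),
]


def mount_all() -> None:
    """Mount each source at its target, adding -o options when given (requires root)."""
    for source, target, options in MOUNTS:
        cmd = ["mount", source, target]
        if options:
            cmd[1:1] = ["-o", options]  # becomes: mount -o <options> <source> <target>
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)


if __name__ == "__main__":
    mount_all()
```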
Jan 24 00:34:10.804356 ignition[1074]: INFO : Ignition 2.24.0 Jan 24 00:34:10.804356 ignition[1074]: INFO : Stage: mount Jan 24 00:34:10.804356 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:34:10.804356 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 00:34:10.809849 kernel: audit: type=1130 audit(1769214850.805:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:10.806193 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:34:10.810448 ignition[1074]: INFO : mount: mount passed Jan 24 00:34:10.810448 ignition[1074]: INFO : Ignition finished successfully Jan 24 00:34:11.659954 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:13.672949 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:17.686965 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:17.695663 coreos-metadata[975]: Jan 24 00:34:17.695 WARN failed to locate config-drive, using the metadata service API instead Jan 24 00:34:17.727693 coreos-metadata[975]: Jan 24 00:34:17.727 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 24 00:34:18.481787 coreos-metadata[975]: Jan 24 00:34:18.481 INFO Fetch successful Jan 24 00:34:18.481787 coreos-metadata[975]: Jan 24 00:34:18.481 INFO wrote hostname ci-4593-0-0-7-bbab233dcd to /sysroot/etc/hostname Jan 24 00:34:18.511078 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:34:18.511188 kernel: audit: type=1130 audit(1769214858.484:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:18.511232 kernel: audit: type=1131 audit(1769214858.484:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:18.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:18.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:18.484579 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 24 00:34:18.484743 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 24 00:34:18.489058 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:34:18.533487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
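The flatcar-openstack-hostname step above gives up on locating a config drive, fetches http://169.254.169.254/latest/meta-data/hostname from the metadata service, and writes the result to /sysroot/etc/hostname. A rough Python equivalent of that single step, purely as an illustration of the pattern in the log:

```python
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"  # endpoint from the log
HOSTNAME_FILE = "/sysroot/etc/hostname"                            # target path from the log


def write_hostname_from_metadata() -> str:
    """Fetch the instance hostname from the metadata service and persist it."""
    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(HOSTNAME_FILE, "w") as fh:
        fh.write(hostname + "\n")
    return hostname


if __name__ == "__main__":
    print("wrote hostname", write_hostname_from_metadata())
```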
Jan 24 00:34:18.574957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1091) Jan 24 00:34:18.578813 kernel: BTRFS info (device vda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:34:18.578862 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:34:18.584493 kernel: BTRFS info (device vda6): turning on async discard Jan 24 00:34:18.584552 kernel: BTRFS info (device vda6): enabling free space tree Jan 24 00:34:18.586067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:34:18.612307 ignition[1109]: INFO : Ignition 2.24.0 Jan 24 00:34:18.612307 ignition[1109]: INFO : Stage: files Jan 24 00:34:18.613611 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:34:18.613611 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 00:34:18.613611 ignition[1109]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:34:18.614934 ignition[1109]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:34:18.614934 ignition[1109]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:34:18.619639 ignition[1109]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:34:18.621411 ignition[1109]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:34:18.621411 ignition[1109]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:34:18.620592 unknown[1109]: wrote ssh authorized keys file for user: core Jan 24 00:34:18.624624 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:34:18.625478 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:34:18.688803 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:34:18.815460 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:34:18.815460 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:34:18.817184 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:34:18.817184 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:34:18.817184 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:34:18.817184 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:34:18.817184 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:34:18.817184 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:34:18.817184 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:34:18.819853 ignition[1109]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:34:18.819853 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:34:18.819853 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:34:18.821436 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:34:18.821436 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:34:18.821436 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:34:19.061948 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:34:19.646480 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:34:19.646480 ignition[1109]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:34:19.648191 ignition[1109]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:34:19.650569 ignition[1109]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:34:19.650569 ignition[1109]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:34:19.650569 ignition[1109]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:34:19.653900 ignition[1109]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:34:19.653900 ignition[1109]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:34:19.653900 ignition[1109]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:34:19.653900 ignition[1109]: INFO : files: files passed Jan 24 00:34:19.653900 ignition[1109]: INFO : Ignition finished successfully Jan 24 00:34:19.660633 kernel: audit: type=1130 audit(1769214859.653:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.654241 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:34:19.655601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:34:19.660863 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:34:19.667631 systemd[1]: ignition-quench.service: Deactivated successfully. 
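The files stage above adds SSH keys for the "core" user, downloads the Helm tarball and the kubernetes sysext image, writes several files plus a symlink under /etc/extensions, and installs and enables prepare-helm.service. The sketch below builds an Ignition-v3-style config in Python that would describe roughly those operations. Treat it as a reconstruction: field names follow the Ignition 3.x spec as best I recall it and should be checked against the official documentation, the SSH key and unit contents are placeholders, and only a subset of the files seen in the log is shown.

```python
import json

# Hypothetical reconstruction of the kind of config the files stage above is acting on.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                "contents": {
                    "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# placeholder"}
        ]
    },
}

print(json.dumps(config, indent=2))
```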
Jan 24 00:34:19.667717 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:34:19.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.674462 kernel: audit: type=1130 audit(1769214859.669:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.674492 kernel: audit: type=1131 audit(1769214859.670:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.681592 initrd-setup-root-after-ignition[1140]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:34:19.681592 initrd-setup-root-after-ignition[1140]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:34:19.683616 initrd-setup-root-after-ignition[1144]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:34:19.685474 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:34:19.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.686745 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:34:19.691052 kernel: audit: type=1130 audit(1769214859.685:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.691899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:34:19.731890 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:34:19.732014 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:34:19.740173 kernel: audit: type=1130 audit(1769214859.732:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.740199 kernel: audit: type=1131 audit(1769214859.732:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.733342 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:34:19.740635 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 24 00:34:19.741657 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:34:19.742499 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:34:19.779673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:34:19.784172 kernel: audit: type=1130 audit(1769214859.779:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.783029 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:34:19.795017 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:34:19.795223 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:34:19.796841 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:34:19.797835 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:34:19.798964 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:34:19.799560 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:34:19.803690 kernel: audit: type=1131 audit(1769214859.799:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.803771 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:34:19.804371 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:34:19.805288 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:34:19.806129 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:34:19.807015 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:34:19.807856 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:34:19.808728 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:34:19.809532 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:34:19.810410 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:34:19.811260 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:34:19.812105 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:34:19.812900 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:34:19.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.813036 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:34:19.814114 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 24 00:34:19.814571 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:34:19.815511 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:34:19.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.815618 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:34:19.816291 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:34:19.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.816411 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:34:19.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.817469 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:34:19.817583 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:34:19.818380 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:34:19.818480 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:34:19.821082 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:34:19.821499 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:34:19.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.821626 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:34:19.824956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:34:19.825553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:34:19.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.825681 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:34:19.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.826428 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:34:19.826534 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:34:19.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.827143 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:34:19.827445 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:34:19.834227 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 24 00:34:19.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.834308 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:34:19.851036 ignition[1164]: INFO : Ignition 2.24.0 Jan 24 00:34:19.851036 ignition[1164]: INFO : Stage: umount Jan 24 00:34:19.851036 ignition[1164]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:34:19.851036 ignition[1164]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 00:34:19.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.850181 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:34:19.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.856560 ignition[1164]: INFO : umount: umount passed Jan 24 00:34:19.856560 ignition[1164]: INFO : Ignition finished successfully Jan 24 00:34:19.852089 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:34:19.852185 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:34:19.853170 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:34:19.853213 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:34:19.853661 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:34:19.853697 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:34:19.854141 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:34:19.854178 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:34:19.854578 systemd[1]: Stopped target network.target - Network. Jan 24 00:34:19.854960 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:34:19.854997 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:34:19.856189 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:34:19.856849 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:34:19.859945 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 24 00:34:19.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.860505 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:34:19.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.861161 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:34:19.861812 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:34:19.861841 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:34:19.862496 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:34:19.862525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:34:19.863165 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 24 00:34:19.863188 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:34:19.863805 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:34:19.863845 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:34:19.864476 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:34:19.864508 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:34:19.865244 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:34:19.866119 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:34:19.871780 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:34:19.871897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:34:19.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.877000 audit: BPF prog-id=6 op=UNLOAD Jan 24 00:34:19.878874 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:34:19.878984 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:34:19.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.880000 audit: BPF prog-id=9 op=UNLOAD Jan 24 00:34:19.881494 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 24 00:34:19.882898 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:34:19.882957 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:34:19.884043 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:34:19.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.885432 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 24 00:34:19.885481 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:34:19.887388 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:34:19.887435 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:34:19.888980 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:34:19.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.889018 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:34:19.889853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:34:19.890832 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:34:19.896174 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:34:19.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.897477 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:34:19.897947 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:34:19.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.900029 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:34:19.900561 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:34:19.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.901531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:34:19.901956 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:34:19.902672 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:34:19.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.902802 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:34:19.903394 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:34:19.903431 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:34:19.903830 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:34:19.903860 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:34:19.904273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 24 00:34:19.904305 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:34:19.907507 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:34:19.908959 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 24 00:34:19.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.909009 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:34:19.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.909704 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:34:19.909741 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:34:19.911014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:34:19.911057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:19.925580 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:34:19.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.926422 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:34:19.929373 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:34:19.929927 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:34:19.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:19.931375 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:34:19.933129 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:34:19.948804 systemd[1]: Switching root. Jan 24 00:34:20.007262 systemd-journald[340]: Journal stopped Jan 24 00:34:21.021715 systemd-journald[340]: Received SIGTERM from PID 1 (systemd). 
Jan 24 00:34:21.021800 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:34:21.021822 kernel: SELinux: policy capability open_perms=1 Jan 24 00:34:21.021834 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:34:21.021847 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:34:21.021858 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:34:21.021869 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:34:21.021880 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:34:21.021896 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:34:21.021938 kernel: SELinux: policy capability userspace_initial_context=0 Jan 24 00:34:21.021959 systemd[1]: Successfully loaded SELinux policy in 62.372ms. Jan 24 00:34:21.021976 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.418ms. Jan 24 00:34:21.021989 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:34:21.022001 systemd[1]: Detected virtualization kvm. Jan 24 00:34:21.022013 systemd[1]: Detected architecture x86-64. Jan 24 00:34:21.022026 systemd[1]: Detected first boot. Jan 24 00:34:21.022037 systemd[1]: Hostname set to <ci-4593-0-0-7-bbab233dcd>. Jan 24 00:34:21.022051 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 24 00:34:21.022069 zram_generator::config[1208]: No configuration found. Jan 24 00:34:21.022082 kernel: Guest personality initialized and is inactive Jan 24 00:34:21.022093 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 24 00:34:21.022103 kernel: Initialized host personality Jan 24 00:34:21.022116 kernel: NET: Registered PF_VSOCK protocol family Jan 24 00:34:21.022128 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:34:21.022140 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:34:21.022152 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:34:21.022167 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:34:21.022180 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:34:21.022192 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:34:21.022204 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:34:21.022215 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:34:21.022227 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:34:21.022238 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:34:21.022249 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:34:21.022262 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:34:21.022276 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:34:21.022287 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:34:21.022298 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
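The log above shows systemd in the real root detecting KVM, treating this as a first boot, and initializing the machine ID from the SMBIOS/DMI UUID. On Linux that UUID is exposed at /sys/class/dmi/id/product_uuid (readable only by root). The snippet below is a small illustrative check of the two values; the comparison at the end reflects my understanding that a machine ID derived from a firmware UUID is that UUID with dashes stripped and lower-cased, which should be treated as an assumption rather than a guarantee.

```python
from pathlib import Path

DMI_UUID = Path("/sys/class/dmi/id/product_uuid")  # SMBIOS/DMI product UUID (root-readable)
MACHINE_ID = Path("/etc/machine-id")               # systemd machine ID (32 hex chars, no dashes)


def read(path: Path) -> str:
    """Return the trimmed file contents, or a placeholder if the file is missing."""
    return path.read_text().strip() if path.exists() else "<unavailable>"


if __name__ == "__main__":
    dmi = read(DMI_UUID)
    mid = read(MACHINE_ID)
    print("DMI product UUID:", dmi)
    print("machine-id      :", mid)
    # Assumed relationship: UUID with dashes removed and lower-cased equals the machine ID.
    print("match:", dmi.replace("-", "").lower() == mid.lower())
```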
Jan 24 00:34:21.022309 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:34:21.022321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:34:21.022333 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:34:21.022346 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:34:21.022358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:34:21.022371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:34:21.022383 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:34:21.022395 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:34:21.022406 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:34:21.022419 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:34:21.022430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:34:21.022443 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:34:21.022454 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 24 00:34:21.022465 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:34:21.022476 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:34:21.022487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:34:21.022499 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:34:21.022512 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 24 00:34:21.022523 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:34:21.022535 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 24 00:34:21.022546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:34:21.022558 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 24 00:34:21.022569 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 24 00:34:21.022581 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:34:21.022593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:34:21.022604 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:34:21.022615 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:34:21.022627 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:34:21.022638 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:34:21.022649 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:21.022661 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:34:21.022675 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:34:21.022690 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:34:21.022703 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 24 00:34:21.022714 systemd[1]: Reached target machines.target - Containers. Jan 24 00:34:21.022725 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:34:21.022736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:34:21.022749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:34:21.022761 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:34:21.022772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:34:21.022784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:34:21.022796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:34:21.022808 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:34:21.022819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:34:21.022830 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:34:21.022841 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:34:21.022852 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:34:21.022864 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:34:21.022877 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:34:21.022890 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:34:21.022902 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:34:21.023936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:34:21.023996 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:34:21.024010 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:34:21.024026 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 24 00:34:21.024040 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:34:21.024053 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:21.024065 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:34:21.024077 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:34:21.024092 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:34:21.024103 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:34:21.024115 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:34:21.024127 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:34:21.024140 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:34:21.024151 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:34:21.024162 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jan 24 00:34:21.024175 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:34:21.024187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:34:21.024198 kernel: ACPI: bus type drm_connector registered Jan 24 00:34:21.024210 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:34:21.024221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:34:21.024232 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:34:21.024244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:34:21.024257 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:34:21.024269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:34:21.024298 systemd-journald[1283]: Collecting audit messages is enabled. Jan 24 00:34:21.024322 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:34:21.024333 kernel: fuse: init (API version 7.41) Jan 24 00:34:21.024344 systemd-journald[1283]: Journal started Jan 24 00:34:21.024368 systemd-journald[1283]: Runtime Journal (/run/log/journal/a49ad875df124bed8df5a021cbd91266) is 8M, max 77.9M, 69.9M free. Jan 24 00:34:20.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:20.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:20.933000 audit: BPF prog-id=14 op=UNLOAD Jan 24 00:34:20.933000 audit: BPF prog-id=13 op=UNLOAD Jan 24 00:34:20.934000 audit: BPF prog-id=15 op=LOAD Jan 24 00:34:20.934000 audit: BPF prog-id=16 op=LOAD Jan 24 00:34:20.934000 audit: BPF prog-id=17 op=LOAD Jan 24 00:34:20.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:20.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:20.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:21.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.018000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 24 00:34:21.018000 audit[1283]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc4cb70c50 a2=4000 a3=0 items=0 ppid=1 pid=1283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:21.018000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 24 00:34:21.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:20.745203 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:34:20.770927 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:34:20.771349 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:34:21.026935 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:34:21.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.028959 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:34:21.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.030927 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:34:21.032879 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:34:21.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:21.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.033573 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:34:21.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.034291 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 24 00:34:21.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.045765 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:34:21.048195 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 24 00:34:21.048674 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:34:21.048700 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:34:21.049876 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 24 00:34:21.051094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:34:21.051189 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:34:21.053081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:34:21.054225 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:34:21.054998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:34:21.057067 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:34:21.057498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:34:21.060069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:34:21.062081 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:34:21.078275 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:34:21.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.084926 kernel: loop1: detected capacity change from 0 to 50784 Jan 24 00:34:21.084974 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:34:21.089565 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:34:21.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 00:34:21.090742 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:34:21.092116 systemd-journald[1283]: Time spent on flushing to /var/log/journal/a49ad875df124bed8df5a021cbd91266 is 56.617ms for 1845 entries. Jan 24 00:34:21.092116 systemd-journald[1283]: System Journal (/var/log/journal/a49ad875df124bed8df5a021cbd91266) is 8M, max 588.1M, 580.1M free. Jan 24 00:34:21.166385 systemd-journald[1283]: Received client request to flush runtime journal. Jan 24 00:34:21.166435 kernel: loop2: detected capacity change from 0 to 1656 Jan 24 00:34:21.166458 kernel: loop3: detected capacity change from 0 to 224512 Jan 24 00:34:21.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.097077 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 00:34:21.119030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:34:21.168772 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:34:21.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.173112 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:34:21.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.174000 audit: BPF prog-id=18 op=LOAD Jan 24 00:34:21.175000 audit: BPF prog-id=19 op=LOAD Jan 24 00:34:21.175000 audit: BPF prog-id=20 op=LOAD Jan 24 00:34:21.179089 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 24 00:34:21.181000 audit: BPF prog-id=21 op=LOAD Jan 24 00:34:21.183053 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:34:21.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.185378 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:34:21.187372 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 00:34:21.192000 audit: BPF prog-id=22 op=LOAD Jan 24 00:34:21.192000 audit: BPF prog-id=23 op=LOAD Jan 24 00:34:21.192000 audit: BPF prog-id=24 op=LOAD Jan 24 00:34:21.195107 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 00:34:21.195000 audit: BPF prog-id=25 op=LOAD Jan 24 00:34:21.196000 audit: BPF prog-id=26 op=LOAD Jan 24 00:34:21.196000 audit: BPF prog-id=27 op=LOAD Jan 24 00:34:21.198669 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:34:21.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:21.212079 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:34:21.222117 kernel: loop4: detected capacity change from 0 to 111560 Jan 24 00:34:21.238866 systemd-nsresourced[1353]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 00:34:21.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.240293 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 24 00:34:21.248077 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 24 00:34:21.248092 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 24 00:34:21.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.253300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:34:21.259212 kernel: loop5: detected capacity change from 0 to 50784 Jan 24 00:34:21.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.275865 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:34:21.295167 kernel: loop6: detected capacity change from 0 to 1656 Jan 24 00:34:21.314941 kernel: loop7: detected capacity change from 0 to 224512 Jan 24 00:34:21.338737 systemd-resolved[1350]: Positive Trust Anchors: Jan 24 00:34:21.338750 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:34:21.338754 systemd-resolved[1350]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:34:21.338784 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:34:21.345971 kernel: loop1: detected capacity change from 0 to 111560 Jan 24 00:34:21.349227 systemd-oomd[1348]: No swap; memory pressure usage will be degraded Jan 24 00:34:21.349639 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 24 00:34:21.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.364717 systemd-resolved[1350]: Using system hostname 'ci-4593-0-0-7-bbab233dcd'. Jan 24 00:34:21.365766 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:34:21.366451 (sd-merge)[1366]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. 
Jan 24 00:34:21.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.368048 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:34:21.370600 (sd-merge)[1366]: Merged extensions into '/usr'. Jan 24 00:34:21.375876 systemd[1]: Reload requested from client PID 1330 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:34:21.375890 systemd[1]: Reloading... Jan 24 00:34:21.439943 zram_generator::config[1399]: No configuration found. Jan 24 00:34:21.620058 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:34:21.620317 systemd[1]: Reloading finished in 244 ms. Jan 24 00:34:21.638045 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:34:21.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.647044 systemd[1]: Starting ensure-sysext.service... Jan 24 00:34:21.648387 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:34:21.650000 audit: BPF prog-id=28 op=LOAD Jan 24 00:34:21.650000 audit: BPF prog-id=21 op=UNLOAD Jan 24 00:34:21.650000 audit: BPF prog-id=29 op=LOAD Jan 24 00:34:21.650000 audit: BPF prog-id=18 op=UNLOAD Jan 24 00:34:21.650000 audit: BPF prog-id=30 op=LOAD Jan 24 00:34:21.650000 audit: BPF prog-id=31 op=LOAD Jan 24 00:34:21.650000 audit: BPF prog-id=19 op=UNLOAD Jan 24 00:34:21.650000 audit: BPF prog-id=20 op=UNLOAD Jan 24 00:34:21.651000 audit: BPF prog-id=32 op=LOAD Jan 24 00:34:21.656000 audit: BPF prog-id=25 op=UNLOAD Jan 24 00:34:21.656000 audit: BPF prog-id=33 op=LOAD Jan 24 00:34:21.656000 audit: BPF prog-id=34 op=LOAD Jan 24 00:34:21.656000 audit: BPF prog-id=26 op=UNLOAD Jan 24 00:34:21.656000 audit: BPF prog-id=27 op=UNLOAD Jan 24 00:34:21.657000 audit: BPF prog-id=35 op=LOAD Jan 24 00:34:21.657000 audit: BPF prog-id=15 op=UNLOAD Jan 24 00:34:21.657000 audit: BPF prog-id=36 op=LOAD Jan 24 00:34:21.657000 audit: BPF prog-id=37 op=LOAD Jan 24 00:34:21.657000 audit: BPF prog-id=16 op=UNLOAD Jan 24 00:34:21.657000 audit: BPF prog-id=17 op=UNLOAD Jan 24 00:34:21.658000 audit: BPF prog-id=38 op=LOAD Jan 24 00:34:21.658000 audit: BPF prog-id=22 op=UNLOAD Jan 24 00:34:21.658000 audit: BPF prog-id=39 op=LOAD Jan 24 00:34:21.658000 audit: BPF prog-id=40 op=LOAD Jan 24 00:34:21.658000 audit: BPF prog-id=23 op=UNLOAD Jan 24 00:34:21.658000 audit: BPF prog-id=24 op=UNLOAD Jan 24 00:34:21.672045 systemd[1]: Reload requested from client PID 1445 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:34:21.672062 systemd[1]: Reloading... Jan 24 00:34:21.672906 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 00:34:21.672941 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 00:34:21.673127 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:34:21.674547 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Jan 24 00:34:21.674648 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. 
Jan 24 00:34:21.682769 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:34:21.682883 systemd-tmpfiles[1446]: Skipping /boot Jan 24 00:34:21.690523 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:34:21.690602 systemd-tmpfiles[1446]: Skipping /boot Jan 24 00:34:21.737951 zram_generator::config[1478]: No configuration found. Jan 24 00:34:21.890249 systemd[1]: Reloading finished in 217 ms. Jan 24 00:34:21.899624 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:34:21.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.900000 audit: BPF prog-id=41 op=LOAD Jan 24 00:34:21.900000 audit: BPF prog-id=32 op=UNLOAD Jan 24 00:34:21.900000 audit: BPF prog-id=42 op=LOAD Jan 24 00:34:21.900000 audit: BPF prog-id=43 op=LOAD Jan 24 00:34:21.900000 audit: BPF prog-id=33 op=UNLOAD Jan 24 00:34:21.900000 audit: BPF prog-id=34 op=UNLOAD Jan 24 00:34:21.901000 audit: BPF prog-id=44 op=LOAD Jan 24 00:34:21.901000 audit: BPF prog-id=29 op=UNLOAD Jan 24 00:34:21.901000 audit: BPF prog-id=45 op=LOAD Jan 24 00:34:21.901000 audit: BPF prog-id=46 op=LOAD Jan 24 00:34:21.901000 audit: BPF prog-id=30 op=UNLOAD Jan 24 00:34:21.901000 audit: BPF prog-id=31 op=UNLOAD Jan 24 00:34:21.902000 audit: BPF prog-id=47 op=LOAD Jan 24 00:34:21.902000 audit: BPF prog-id=38 op=UNLOAD Jan 24 00:34:21.902000 audit: BPF prog-id=48 op=LOAD Jan 24 00:34:21.902000 audit: BPF prog-id=49 op=LOAD Jan 24 00:34:21.902000 audit: BPF prog-id=39 op=UNLOAD Jan 24 00:34:21.902000 audit: BPF prog-id=40 op=UNLOAD Jan 24 00:34:21.902000 audit: BPF prog-id=50 op=LOAD Jan 24 00:34:21.902000 audit: BPF prog-id=28 op=UNLOAD Jan 24 00:34:21.903000 audit: BPF prog-id=51 op=LOAD Jan 24 00:34:21.903000 audit: BPF prog-id=35 op=UNLOAD Jan 24 00:34:21.903000 audit: BPF prog-id=52 op=LOAD Jan 24 00:34:21.903000 audit: BPF prog-id=53 op=LOAD Jan 24 00:34:21.903000 audit: BPF prog-id=36 op=UNLOAD Jan 24 00:34:21.903000 audit: BPF prog-id=37 op=UNLOAD Jan 24 00:34:21.906070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:34:21.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:21.913065 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:34:21.915418 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:34:21.919019 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:34:21.925023 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:34:21.926564 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:34:21.931000 audit: BPF prog-id=8 op=UNLOAD Jan 24 00:34:21.931000 audit: BPF prog-id=7 op=UNLOAD Jan 24 00:34:21.932083 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 24 00:34:21.932000 audit: BPF prog-id=54 op=LOAD Jan 24 00:34:21.932000 audit: BPF prog-id=55 op=LOAD Jan 24 00:34:21.935142 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:34:21.943800 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:34:21.945754 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:34:21.947321 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:34:21.955915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:21.956063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:34:21.957408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:34:21.960629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:34:21.975837 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:34:21.976707 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:34:21.976935 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:34:21.977024 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:34:21.977109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:21.980589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:21.980721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:34:21.980859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:34:21.981710 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:34:21.981802 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:34:21.981877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:21.987282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:21.987482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:34:21.989107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:34:21.989000 audit[1533]: SYSTEM_BOOT pid=1533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? 
terminal=? res=success' Jan 24 00:34:21.995725 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 24 00:34:21.996940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:34:21.997089 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:34:21.997172 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:34:21.997319 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:34:21.997793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:34:22.007277 systemd[1]: Finished ensure-sysext.service. Jan 24 00:34:22.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.008745 systemd-udevd[1529]: Using default interface naming scheme 'v257'. Jan 24 00:34:22.017943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:34:22.018114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:34:22.020827 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 24 00:34:22.020870 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 24 00:34:22.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.021685 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:34:22.023302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:34:22.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.024432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:34:22.025352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:34:22.026943 kernel: PTP clock support registered Jan 24 00:34:22.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 24 00:34:22.026042 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:34:22.027957 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:34:22.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.035241 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:34:22.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.038779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:34:22.039790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:34:22.041843 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 24 00:34:22.042064 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 24 00:34:22.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.044627 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:34:22.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:22.056000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 00:34:22.056000 audit[1565]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff649a6c00 a2=420 a3=0 items=0 ppid=1525 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:22.056000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:34:22.058128 augenrules[1565]: No rules Jan 24 00:34:22.058388 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:34:22.059056 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:34:22.063447 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:34:22.066593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:34:22.103481 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 24 00:34:22.104874 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:34:22.148023 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:34:22.164550 systemd-networkd[1573]: lo: Link UP Jan 24 00:34:22.164559 systemd-networkd[1573]: lo: Gained carrier Jan 24 00:34:22.165246 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:34:22.166328 systemd[1]: Reached target network.target - Network. Jan 24 00:34:22.170042 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 24 00:34:22.172763 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:34:22.230824 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 00:34:22.267935 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:34:22.270605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:34:22.273443 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:34:22.273451 systemd-networkd[1573]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:34:22.274090 systemd-networkd[1573]: eth0: Link UP Jan 24 00:34:22.274196 systemd-networkd[1573]: eth0: Gained carrier Jan 24 00:34:22.274217 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:34:22.276819 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:34:22.280952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 24 00:34:22.290969 systemd-networkd[1573]: eth0: DHCPv4 address 10.0.1.115/25, gateway 10.0.1.1 acquired from 10.0.1.1 Jan 24 00:34:22.303929 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:34:22.309744 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 24 00:34:22.407928 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 24 00:34:22.415610 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:34:22.415883 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:34:22.494935 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 24 00:34:22.497968 kernel: Console: switching to colour dummy device 80x25 Jan 24 00:34:22.501131 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 24 00:34:22.504013 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 24 00:34:22.504045 kernel: [drm] features: -context_init Jan 24 00:34:22.507927 kernel: [drm] number of scanouts: 1 Jan 24 00:34:22.507970 kernel: [drm] number of cap sets: 0 Jan 24 00:34:22.510921 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 24 00:34:22.516338 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 24 00:34:22.516392 kernel: Console: switching to colour frame buffer device 160x50 Jan 24 00:34:22.519952 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 24 00:34:22.568202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:22.577741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:34:22.577953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:22.581169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:22.626327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:34:22.626588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:22.635091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:34:22.696610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:34:22.698854 ldconfig[1527]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:34:22.702655 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:34:22.705602 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:34:22.720496 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:34:22.722209 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:34:22.722396 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:34:22.722478 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:34:22.722550 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 00:34:22.722740 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:34:22.722842 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:34:22.722943 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 00:34:22.723101 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 24 00:34:22.723160 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:34:22.723213 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jan 24 00:34:22.723236 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:34:22.723281 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:34:22.724207 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:34:22.727708 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:34:22.730988 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 00:34:22.732255 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 24 00:34:22.733207 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 00:34:22.735826 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:34:22.737583 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 24 00:34:22.738607 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:34:22.739798 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:34:22.740161 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:34:22.740686 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:34:22.740714 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:34:22.743032 systemd[1]: Starting chronyd.service - NTP client/server... Jan 24 00:34:22.745389 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:34:22.749122 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:34:22.758873 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:34:22.760280 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:34:22.764019 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:34:22.765925 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:22.769875 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:34:22.771684 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:34:22.774968 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 24 00:34:22.781106 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:34:22.786188 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:34:22.797299 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:34:22.800142 google_oslogin_nss_cache[1648]: oslogin_cache_refresh[1648]: Refreshing passwd entry cache Jan 24 00:34:22.800165 oslogin_cache_refresh[1648]: Refreshing passwd entry cache Jan 24 00:34:22.804051 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:34:22.805477 jq[1644]: false Jan 24 00:34:22.809530 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:34:22.811936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 24 00:34:22.813005 google_oslogin_nss_cache[1648]: oslogin_cache_refresh[1648]: Failure getting users, quitting Jan 24 00:34:22.813001 oslogin_cache_refresh[1648]: Failure getting users, quitting Jan 24 00:34:22.813093 google_oslogin_nss_cache[1648]: oslogin_cache_refresh[1648]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:34:22.813093 google_oslogin_nss_cache[1648]: oslogin_cache_refresh[1648]: Refreshing group entry cache Jan 24 00:34:22.813021 oslogin_cache_refresh[1648]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:34:22.813066 oslogin_cache_refresh[1648]: Refreshing group entry cache Jan 24 00:34:22.814417 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:34:22.818161 google_oslogin_nss_cache[1648]: oslogin_cache_refresh[1648]: Failure getting groups, quitting Jan 24 00:34:22.818157 oslogin_cache_refresh[1648]: Failure getting groups, quitting Jan 24 00:34:22.818271 google_oslogin_nss_cache[1648]: oslogin_cache_refresh[1648]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:34:22.818170 oslogin_cache_refresh[1648]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:34:22.818803 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:34:22.824361 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:34:22.827664 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:34:22.830754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:34:22.831210 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:34:22.831467 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 00:34:22.831644 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 00:34:22.843797 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:34:22.845127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:34:22.854979 extend-filesystems[1647]: Found /dev/vda6 Jan 24 00:34:22.872167 extend-filesystems[1647]: Found /dev/vda9 Jan 24 00:34:22.876519 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:34:22.876859 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:34:22.880297 jq[1660]: true Jan 24 00:34:22.886224 tar[1664]: linux-amd64/LICENSE Jan 24 00:34:22.886224 tar[1664]: linux-amd64/helm Jan 24 00:34:22.886465 extend-filesystems[1647]: Checking size of /dev/vda9 Jan 24 00:34:22.922582 update_engine[1657]: I20260124 00:34:22.922497 1657 main.cc:92] Flatcar Update Engine starting Jan 24 00:34:22.934493 jq[1686]: true Jan 24 00:34:22.934710 extend-filesystems[1647]: Resized partition /dev/vda9 Jan 24 00:34:22.939372 extend-filesystems[1694]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 00:34:22.945960 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 24 00:34:22.943082 dbus-daemon[1642]: [system] SELinux support is enabled Jan 24 00:34:22.944405 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 24 00:34:22.948579 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:34:22.948607 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:34:22.951124 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:34:22.951139 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:34:22.967083 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:34:22.982439 update_engine[1657]: I20260124 00:34:22.971673 1657 update_check_scheduler.cc:74] Next update check in 6m39s Jan 24 00:34:22.981426 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:34:22.974040 chronyd[1639]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 24 00:34:22.974776 chronyd[1639]: Loaded seccomp filter (level 2) Jan 24 00:34:22.983595 systemd[1]: Started chronyd.service - NTP client/server. Jan 24 00:34:23.080193 systemd-logind[1654]: New seat seat0. Jan 24 00:34:23.133779 systemd-logind[1654]: Watching system buttons on /dev/input/event3 (Power Button) Jan 24 00:34:23.133802 systemd-logind[1654]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:34:23.134114 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:34:23.148524 locksmithd[1697]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:34:23.179928 bash[1718]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:34:23.180982 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:34:23.188686 systemd[1]: Starting sshkeys.service... Jan 24 00:34:23.225255 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:34:23.227946 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 24 00:34:23.231742 sshd_keygen[1676]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:34:23.234282 containerd[1677]: time="2026-01-24T00:34:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 24 00:34:23.235205 containerd[1677]: time="2026-01-24T00:34:23.235180519Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 24 00:34:23.248652 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:23.260088 containerd[1677]: time="2026-01-24T00:34:23.260048365Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.345µs" Jan 24 00:34:23.260180 containerd[1677]: time="2026-01-24T00:34:23.260168856Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 24 00:34:23.260255 containerd[1677]: time="2026-01-24T00:34:23.260245762Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 24 00:34:23.260317 containerd[1677]: time="2026-01-24T00:34:23.260308755Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 24 00:34:23.260491 containerd[1677]: time="2026-01-24T00:34:23.260478996Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 24 00:34:23.260534 containerd[1677]: time="2026-01-24T00:34:23.260526890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:34:23.260608 containerd[1677]: time="2026-01-24T00:34:23.260597795Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:34:23.260641 containerd[1677]: time="2026-01-24T00:34:23.260634131Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:34:23.260871 containerd[1677]: time="2026-01-24T00:34:23.260855622Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:34:23.260955 containerd[1677]: time="2026-01-24T00:34:23.260944671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:34:23.260993 containerd[1677]: time="2026-01-24T00:34:23.260984968Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:34:23.261029 containerd[1677]: time="2026-01-24T00:34:23.261021840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:34:23.261207 containerd[1677]: time="2026-01-24T00:34:23.261195855Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:34:23.261398 containerd[1677]: time="2026-01-24T00:34:23.261238723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 24 
00:34:23.261398 containerd[1677]: time="2026-01-24T00:34:23.261304022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 24 00:34:23.261527 containerd[1677]: time="2026-01-24T00:34:23.261516300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:34:23.261579 containerd[1677]: time="2026-01-24T00:34:23.261570137Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:34:23.261637 containerd[1677]: time="2026-01-24T00:34:23.261614310Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 24 00:34:23.261705 containerd[1677]: time="2026-01-24T00:34:23.261694767Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 24 00:34:23.262095 containerd[1677]: time="2026-01-24T00:34:23.262068467Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 24 00:34:23.262187 containerd[1677]: time="2026-01-24T00:34:23.262176678Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:34:23.271262 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:34:23.275317 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:34:23.296309 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:34:23.296556 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:34:23.299413 containerd[1677]: time="2026-01-24T00:34:23.299184884Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 24 00:34:23.299413 containerd[1677]: time="2026-01-24T00:34:23.299248453Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:34:23.299724 containerd[1677]: time="2026-01-24T00:34:23.299322816Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:34:23.299724 containerd[1677]: time="2026-01-24T00:34:23.299689172Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 24 00:34:23.299724 containerd[1677]: time="2026-01-24T00:34:23.299706212Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 24 00:34:23.299888 containerd[1677]: time="2026-01-24T00:34:23.299846553Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 24 00:34:23.299888 containerd[1677]: time="2026-01-24T00:34:23.299862316Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 24 00:34:23.299888 containerd[1677]: time="2026-01-24T00:34:23.299871192Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 24 00:34:23.300025 containerd[1677]: time="2026-01-24T00:34:23.299970212Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 24 00:34:23.300136 containerd[1677]: time="2026-01-24T00:34:23.299999246Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Jan 24 00:34:23.300136 containerd[1677]: time="2026-01-24T00:34:23.300060299Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 24 00:34:23.300136 containerd[1677]: time="2026-01-24T00:34:23.300072731Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 24 00:34:23.300136 containerd[1677]: time="2026-01-24T00:34:23.300084101Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 24 00:34:23.300136 containerd[1677]: time="2026-01-24T00:34:23.300095393Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 24 00:34:23.300368 containerd[1677]: time="2026-01-24T00:34:23.300356814Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 24 00:34:23.300420 containerd[1677]: time="2026-01-24T00:34:23.300412591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 24 00:34:23.300491 containerd[1677]: time="2026-01-24T00:34:23.300460329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 24 00:34:23.300600 containerd[1677]: time="2026-01-24T00:34:23.300471724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 24 00:34:23.300600 containerd[1677]: time="2026-01-24T00:34:23.300536405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 24 00:34:23.300600 containerd[1677]: time="2026-01-24T00:34:23.300547365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 24 00:34:23.300600 containerd[1677]: time="2026-01-24T00:34:23.300566382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 24 00:34:23.300600 containerd[1677]: time="2026-01-24T00:34:23.300579181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 24 00:34:23.300747 containerd[1677]: time="2026-01-24T00:34:23.300589981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 24 00:34:23.300747 containerd[1677]: time="2026-01-24T00:34:23.300723435Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 24 00:34:23.300747 containerd[1677]: time="2026-01-24T00:34:23.300733638Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 24 00:34:23.300957 containerd[1677]: time="2026-01-24T00:34:23.300820059Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 24 00:34:23.301057 containerd[1677]: time="2026-01-24T00:34:23.301020917Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 24 00:34:23.301057 containerd[1677]: time="2026-01-24T00:34:23.301035576Z" level=info msg="Start snapshots syncer" Jan 24 00:34:23.301149 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 24 00:34:23.301548 containerd[1677]: time="2026-01-24T00:34:23.301381041Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 24 00:34:23.305680 containerd[1677]: time="2026-01-24T00:34:23.304179417Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 24 00:34:23.305680 containerd[1677]: time="2026-01-24T00:34:23.304246168Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304343310Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304493133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304511745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304528830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304538975Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304560689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304587187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 24 00:34:23.305830 containerd[1677]: 
time="2026-01-24T00:34:23.304599247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304614194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304640927Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304682345Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304745988Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:34:23.305830 containerd[1677]: time="2026-01-24T00:34:23.304755128Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304764150Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304780761Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304790380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304799965Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304814674Z" level=info msg="runtime interface created" Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304819279Z" level=info msg="created NRI interface" Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304826418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304836237Z" level=info msg="Connect containerd service" Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.304862192Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:34:23.306220 containerd[1677]: time="2026-01-24T00:34:23.305590616Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:34:23.347216 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:34:23.353276 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:34:23.356268 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:34:23.356839 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 24 00:34:23.361932 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 24 00:34:23.383002 extend-filesystems[1694]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:34:23.383002 extend-filesystems[1694]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 24 00:34:23.383002 extend-filesystems[1694]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 24 00:34:23.387922 extend-filesystems[1647]: Resized filesystem in /dev/vda9 Jan 24 00:34:23.387593 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:34:23.389078 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:34:23.416932 containerd[1677]: time="2026-01-24T00:34:23.416053710Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:34:23.417815 containerd[1677]: time="2026-01-24T00:34:23.417792123Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:34:23.417850 containerd[1677]: time="2026-01-24T00:34:23.417725582Z" level=info msg="Start subscribing containerd event" Jan 24 00:34:23.417890 containerd[1677]: time="2026-01-24T00:34:23.417867473Z" level=info msg="Start recovering state" Jan 24 00:34:23.419258 containerd[1677]: time="2026-01-24T00:34:23.419238155Z" level=info msg="Start event monitor" Jan 24 00:34:23.419289 containerd[1677]: time="2026-01-24T00:34:23.419262627Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:34:23.419289 containerd[1677]: time="2026-01-24T00:34:23.419270733Z" level=info msg="Start streaming server" Jan 24 00:34:23.419289 containerd[1677]: time="2026-01-24T00:34:23.419281398Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 24 00:34:23.419347 containerd[1677]: time="2026-01-24T00:34:23.419289126Z" level=info msg="runtime interface starting up..." Jan 24 00:34:23.419347 containerd[1677]: time="2026-01-24T00:34:23.419294729Z" level=info msg="starting plugins..." Jan 24 00:34:23.419347 containerd[1677]: time="2026-01-24T00:34:23.419309163Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 24 00:34:23.419430 containerd[1677]: time="2026-01-24T00:34:23.419413100Z" level=info msg="containerd successfully booted in 0.186146s" Jan 24 00:34:23.419612 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:34:23.554706 tar[1664]: linux-amd64/README.md Jan 24 00:34:23.582369 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:34:23.801932 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:24.172144 systemd-networkd[1573]: eth0: Gained IPv6LL Jan 24 00:34:24.177935 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:34:24.180712 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:34:24.184127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:34:24.188061 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:34:24.218683 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:34:24.261968 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:25.206358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
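The on-line resize above grows the root filesystem on /dev/vda9 to 11,516,923 blocks of 4 KiB, i.e. roughly 44 GiB. A quick check of that figure:

# Worked check of the extend-filesystems result reported above.
blocks = 11_516_923
block_size = 4096                       # "(4k) blocks" in the resize message
size_bytes = blocks * block_size
print(size_bytes)                       # 47173316608 bytes
print(round(size_bytes / 2**30, 2))     # ~43.93 GiB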
Jan 24 00:34:25.212892 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:34:25.288933 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:34:25.292491 systemd[1]: Started sshd@0-10.0.1.115:22-4.153.228.146:59790.service - OpenSSH per-connection server daemon (4.153.228.146:59790). Jan 24 00:34:25.809962 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:25.845451 sshd[1783]: Accepted publickey for core from 4.153.228.146 port 59790 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:25.847817 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:25.856893 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:34:25.859215 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:34:25.873091 systemd-logind[1654]: New session 1 of user core. Jan 24 00:34:25.882226 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:34:25.886092 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:34:25.903370 (systemd)[1796]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:25.907059 systemd-logind[1654]: New session 2 of user core. Jan 24 00:34:25.911335 kubelet[1781]: E0124 00:34:25.911291 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:34:25.913414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:34:25.913538 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:34:25.913848 systemd[1]: kubelet.service: Consumed 999ms CPU time, 265M memory peak. Jan 24 00:34:26.025024 systemd[1796]: Queued start job for default target default.target. Jan 24 00:34:26.046979 systemd[1796]: Created slice app.slice - User Application Slice. Jan 24 00:34:26.047182 systemd[1796]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 24 00:34:26.047239 systemd[1796]: Reached target paths.target - Paths. Jan 24 00:34:26.047330 systemd[1796]: Reached target timers.target - Timers. Jan 24 00:34:26.048697 systemd[1796]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:34:26.051052 systemd[1796]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 24 00:34:26.064183 systemd[1796]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 24 00:34:26.065289 systemd[1796]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:34:26.065390 systemd[1796]: Reached target sockets.target - Sockets. Jan 24 00:34:26.065430 systemd[1796]: Reached target basic.target - Basic System. Jan 24 00:34:26.065466 systemd[1796]: Reached target default.target - Main User Target. Jan 24 00:34:26.065496 systemd[1796]: Startup finished in 148ms. Jan 24 00:34:26.065644 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:34:26.071212 systemd[1]: Started session-1.scope - Session 1 of User core. 
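The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the usual state of a node that has not yet been initialized or joined with kubeadm, which is what ordinarily writes that file; systemd keeps restarting the unit (the retry is visible at 00:34:36 below) until the file exists. A small, purely illustrative sketch of the failing precondition:

# The kubelet exits early because its KubeletConfiguration file is absent.
# kubeadm init/join normally creates it; until then each restart fails.
import os
import sys

config = "/var/lib/kubelet/config.yaml"
if not os.path.exists(config):
    sys.exit(f"failed to load Kubelet config file {config}: no such file or directory")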
Jan 24 00:34:26.273948 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:26.388401 systemd[1]: Started sshd@1-10.0.1.115:22-4.153.228.146:59794.service - OpenSSH per-connection server daemon (4.153.228.146:59794). Jan 24 00:34:26.971527 sshd[1812]: Accepted publickey for core from 4.153.228.146 port 59794 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:26.972905 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:26.978015 systemd-logind[1654]: New session 3 of user core. Jan 24 00:34:26.986177 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:34:27.273410 sshd[1816]: Connection closed by 4.153.228.146 port 59794 Jan 24 00:34:27.273976 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:27.278836 systemd[1]: sshd@1-10.0.1.115:22-4.153.228.146:59794.service: Deactivated successfully. Jan 24 00:34:27.282017 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:34:27.283087 systemd-logind[1654]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:34:27.286121 systemd-logind[1654]: Removed session 3. Jan 24 00:34:27.389861 systemd[1]: Started sshd@2-10.0.1.115:22-4.153.228.146:59798.service - OpenSSH per-connection server daemon (4.153.228.146:59798). Jan 24 00:34:27.939904 sshd[1822]: Accepted publickey for core from 4.153.228.146 port 59798 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:27.943206 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:27.953460 systemd-logind[1654]: New session 4 of user core. Jan 24 00:34:27.961292 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:34:28.233157 sshd[1826]: Connection closed by 4.153.228.146 port 59798 Jan 24 00:34:28.234131 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:28.242156 systemd[1]: sshd@2-10.0.1.115:22-4.153.228.146:59798.service: Deactivated successfully. Jan 24 00:34:28.244538 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:34:28.246347 systemd-logind[1654]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:34:28.248548 systemd-logind[1654]: Removed session 4. 
Jan 24 00:34:29.822950 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:29.833291 coreos-metadata[1641]: Jan 24 00:34:29.833 WARN failed to locate config-drive, using the metadata service API instead Jan 24 00:34:29.849748 coreos-metadata[1641]: Jan 24 00:34:29.849 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 24 00:34:30.285950 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 24 00:34:30.296227 coreos-metadata[1725]: Jan 24 00:34:30.296 WARN failed to locate config-drive, using the metadata service API instead Jan 24 00:34:30.308426 coreos-metadata[1725]: Jan 24 00:34:30.308 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 24 00:34:31.163133 coreos-metadata[1641]: Jan 24 00:34:31.163 INFO Fetch successful Jan 24 00:34:31.163133 coreos-metadata[1641]: Jan 24 00:34:31.163 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 24 00:34:31.735225 coreos-metadata[1725]: Jan 24 00:34:31.735 INFO Fetch successful Jan 24 00:34:31.735225 coreos-metadata[1725]: Jan 24 00:34:31.735 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 24 00:34:32.336644 coreos-metadata[1641]: Jan 24 00:34:32.336 INFO Fetch successful Jan 24 00:34:32.336644 coreos-metadata[1641]: Jan 24 00:34:32.336 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 24 00:34:32.904166 coreos-metadata[1725]: Jan 24 00:34:32.903 INFO Fetch successful Jan 24 00:34:32.906607 unknown[1725]: wrote ssh authorized keys file for user: core Jan 24 00:34:32.912964 coreos-metadata[1641]: Jan 24 00:34:32.912 INFO Fetch successful Jan 24 00:34:32.912964 coreos-metadata[1641]: Jan 24 00:34:32.912 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 24 00:34:32.942117 update-ssh-keys[1840]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:34:32.944687 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:34:32.947626 systemd[1]: Finished sshkeys.service. Jan 24 00:34:33.526382 coreos-metadata[1641]: Jan 24 00:34:33.526 INFO Fetch successful Jan 24 00:34:33.526382 coreos-metadata[1641]: Jan 24 00:34:33.526 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 24 00:34:34.107289 coreos-metadata[1641]: Jan 24 00:34:34.107 INFO Fetch successful Jan 24 00:34:34.107289 coreos-metadata[1641]: Jan 24 00:34:34.107 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 24 00:34:34.740615 coreos-metadata[1641]: Jan 24 00:34:34.739 INFO Fetch successful Jan 24 00:34:34.776408 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:34:34.777843 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:34:34.778793 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:34:34.779817 systemd[1]: Startup finished in 3.495s (kernel) + 13.437s (initrd) + 14.691s (userspace) = 31.624s. Jan 24 00:34:36.165952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:34:36.169961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:34:36.360269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
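The "Startup finished" line above reports 3.495s (kernel) + 13.437s (initrd) + 14.691s (userspace) = 31.624s. The per-phase values are rounded to milliseconds, so re-adding them lands 1 ms short of the stated total, presumably because systemd derives the total from the unrounded timestamps:

# Re-add the rounded phase durations from the "Startup finished" message.
phases = {"kernel": 3.495, "initrd": 13.437, "userspace": 14.691}
print(round(sum(phases.values()), 3))   # 31.623 -- vs. 31.624s reported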
Jan 24 00:34:36.370218 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:34:36.412901 kubelet[1856]: E0124 00:34:36.412865 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:34:36.415719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:34:36.415841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:34:36.416157 systemd[1]: kubelet.service: Consumed 197ms CPU time, 109M memory peak. Jan 24 00:34:38.347479 systemd[1]: Started sshd@3-10.0.1.115:22-4.153.228.146:59398.service - OpenSSH per-connection server daemon (4.153.228.146:59398). Jan 24 00:34:38.900365 sshd[1864]: Accepted publickey for core from 4.153.228.146 port 59398 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:38.901537 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:38.907574 systemd-logind[1654]: New session 5 of user core. Jan 24 00:34:38.916266 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:34:39.201524 sshd[1868]: Connection closed by 4.153.228.146 port 59398 Jan 24 00:34:39.202114 sshd-session[1864]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:39.206561 systemd[1]: sshd@3-10.0.1.115:22-4.153.228.146:59398.service: Deactivated successfully. Jan 24 00:34:39.209317 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:34:39.211511 systemd-logind[1654]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:34:39.213528 systemd-logind[1654]: Removed session 5. Jan 24 00:34:39.312115 systemd[1]: Started sshd@4-10.0.1.115:22-4.153.228.146:59404.service - OpenSSH per-connection server daemon (4.153.228.146:59404). Jan 24 00:34:39.847975 sshd[1874]: Accepted publickey for core from 4.153.228.146 port 59404 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:39.849338 sshd-session[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:39.855039 systemd-logind[1654]: New session 6 of user core. Jan 24 00:34:39.864446 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:34:40.134657 sshd[1878]: Connection closed by 4.153.228.146 port 59404 Jan 24 00:34:40.135399 sshd-session[1874]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:40.139392 systemd[1]: sshd@4-10.0.1.115:22-4.153.228.146:59404.service: Deactivated successfully. Jan 24 00:34:40.141310 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:34:40.142286 systemd-logind[1654]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:34:40.144041 systemd-logind[1654]: Removed session 6. Jan 24 00:34:40.243008 systemd[1]: Started sshd@5-10.0.1.115:22-4.153.228.146:59408.service - OpenSSH per-connection server daemon (4.153.228.146:59408). 
Jan 24 00:34:40.776980 sshd[1884]: Accepted publickey for core from 4.153.228.146 port 59408 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:40.779057 sshd-session[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:40.784644 systemd-logind[1654]: New session 7 of user core. Jan 24 00:34:40.794150 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:34:41.078177 sshd[1888]: Connection closed by 4.153.228.146 port 59408 Jan 24 00:34:41.076506 sshd-session[1884]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:41.082224 systemd-logind[1654]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:34:41.082791 systemd[1]: sshd@5-10.0.1.115:22-4.153.228.146:59408.service: Deactivated successfully. Jan 24 00:34:41.085403 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:34:41.087621 systemd-logind[1654]: Removed session 7. Jan 24 00:34:41.195517 systemd[1]: Started sshd@6-10.0.1.115:22-4.153.228.146:59410.service - OpenSSH per-connection server daemon (4.153.228.146:59410). Jan 24 00:34:41.726544 sshd[1894]: Accepted publickey for core from 4.153.228.146 port 59410 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:41.727113 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:41.732242 systemd-logind[1654]: New session 8 of user core. Jan 24 00:34:41.745106 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:34:41.934299 sudo[1899]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:34:41.934547 sudo[1899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:34:41.945651 sudo[1899]: pam_unix(sudo:session): session closed for user root Jan 24 00:34:42.040934 sshd[1898]: Connection closed by 4.153.228.146 port 59410 Jan 24 00:34:42.041606 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:42.049655 systemd[1]: sshd@6-10.0.1.115:22-4.153.228.146:59410.service: Deactivated successfully. Jan 24 00:34:42.053558 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:34:42.056394 systemd-logind[1654]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:34:42.057492 systemd-logind[1654]: Removed session 8. Jan 24 00:34:42.156341 systemd[1]: Started sshd@7-10.0.1.115:22-4.153.228.146:59420.service - OpenSSH per-connection server daemon (4.153.228.146:59420). Jan 24 00:34:42.704735 sshd[1906]: Accepted publickey for core from 4.153.228.146 port 59420 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:42.706273 sshd-session[1906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:42.711969 systemd-logind[1654]: New session 9 of user core. Jan 24 00:34:42.718082 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 24 00:34:42.917842 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:34:42.918607 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:34:42.923085 sudo[1912]: pam_unix(sudo:session): session closed for user root Jan 24 00:34:42.935115 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 24 00:34:42.935537 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:34:42.954155 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:34:43.015000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:34:43.016958 kernel: kauditd_printk_skb: 179 callbacks suppressed Jan 24 00:34:43.017026 kernel: audit: type=1305 audit(1769214883.015:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:34:43.019103 augenrules[1936]: No rules Jan 24 00:34:43.015000 audit[1936]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffecb99a610 a2=420 a3=0 items=0 ppid=1917 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:43.024147 kernel: audit: type=1300 audit(1769214883.015:225): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffecb99a610 a2=420 a3=0 items=0 ppid=1917 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:43.027305 kernel: audit: type=1327 audit(1769214883.015:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:34:43.015000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:34:43.024688 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:34:43.025086 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:34:43.027992 sudo[1911]: pam_unix(sudo:session): session closed for user root Jan 24 00:34:43.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.037071 kernel: audit: type=1130 audit(1769214883.023:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.037209 kernel: audit: type=1131 audit(1769214883.023:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:43.037253 kernel: audit: type=1106 audit(1769214883.027:228): pid=1911 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.027000 audit[1911]: USER_END pid=1911 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.027000 audit[1911]: CRED_DISP pid=1911 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.045299 kernel: audit: type=1104 audit(1769214883.027:229): pid=1911 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.128399 sshd[1910]: Connection closed by 4.153.228.146 port 59420 Jan 24 00:34:43.128225 sshd-session[1906]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:43.129000 audit[1906]: USER_END pid=1906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.135947 kernel: audit: type=1106 audit(1769214883.129:230): pid=1906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.129000 audit[1906]: CRED_DISP pid=1906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.138147 systemd[1]: sshd@7-10.0.1.115:22-4.153.228.146:59420.service: Deactivated successfully. Jan 24 00:34:43.139942 kernel: audit: type=1104 audit(1769214883.129:231): pid=1906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.1.115:22-4.153.228.146:59420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.140589 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:34:43.142946 kernel: audit: type=1131 audit(1769214883.137:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.1.115:22-4.153.228.146:59420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.143207 systemd-logind[1654]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:34:43.144053 systemd-logind[1654]: Removed session 9. 
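The PROCTITLE fields in these audit records carry the audited process's command line, hex-encoded with NUL bytes separating the arguments. Decoding the one attached to the audit-rules reload recovers the auditctl invocation that produced the "No rules" result above:

# Decode the hex PROCTITLE from the CONFIG_CHANGE/SYSCALL records above.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(a.decode() for a in argv))   # /sbin/auditctl -R /etc/audit/audit.rules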
Jan 24 00:34:43.237640 systemd[1]: Started sshd@8-10.0.1.115:22-4.153.228.146:59436.service - OpenSSH per-connection server daemon (4.153.228.146:59436). Jan 24 00:34:43.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.1.115:22-4.153.228.146:59436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.763000 audit[1945]: USER_ACCT pid=1945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.764599 sshd[1945]: Accepted publickey for core from 4.153.228.146 port 59436 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:34:43.764000 audit[1945]: CRED_ACQ pid=1945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.764000 audit[1945]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6fdc790 a2=3 a3=0 items=0 ppid=1 pid=1945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:43.764000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:34:43.765582 sshd-session[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:43.769385 systemd-logind[1654]: New session 10 of user core. Jan 24 00:34:43.776059 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:34:43.777000 audit[1945]: USER_START pid=1945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.779000 audit[1949]: CRED_ACQ pid=1949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:34:43.975000 audit[1950]: USER_ACCT pid=1950 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.975000 audit[1950]: CRED_REFR pid=1950 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:34:43.976501 sudo[1950]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:34:43.978716 sudo[1950]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:34:43.979000 audit[1950]: USER_START pid=1950 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:34:44.429658 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:34:44.449181 (dockerd)[1969]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:34:44.773901 dockerd[1969]: time="2026-01-24T00:34:44.773429333Z" level=info msg="Starting up" Jan 24 00:34:44.774478 dockerd[1969]: time="2026-01-24T00:34:44.774461310Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 00:34:44.790429 dockerd[1969]: time="2026-01-24T00:34:44.790382993Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 00:34:44.808593 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1336131562-merged.mount: Deactivated successfully. Jan 24 00:34:44.828865 systemd[1]: var-lib-docker-metacopy\x2dcheck1141488623-merged.mount: Deactivated successfully. Jan 24 00:34:44.856222 dockerd[1969]: time="2026-01-24T00:34:44.856001826Z" level=info msg="Loading containers: start." Jan 24 00:34:44.871931 kernel: Initializing XFRM netlink socket Jan 24 00:34:44.928000 audit[2018]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.928000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffff1005070 a2=0 a3=0 items=0 ppid=1969 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.928000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:34:44.930000 audit[2020]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.930000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffec68d0c50 a2=0 a3=0 items=0 ppid=1969 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:34:44.932000 audit[2022]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.932000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd5f508c0 a2=0 a3=0 items=0 ppid=1969 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:34:44.934000 audit[2024]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.934000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd5052310 a2=0 a3=0 items=0 ppid=1969 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:34:44.935000 audit[2026]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.935000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7e80d030 a2=0 a3=0 items=0 ppid=1969 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:34:44.937000 audit[2028]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.937000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff78539960 a2=0 a3=0 items=0 ppid=1969 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:34:44.939000 audit[2030]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.939000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffa40de370 a2=0 a3=0 items=0 ppid=1969 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:34:44.941000 audit[2032]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.941000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd9284b500 a2=0 a3=0 items=0 ppid=1969 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:34:44.982000 audit[2035]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.982000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff40f83b20 a2=0 a3=0 items=0 ppid=1969 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.982000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 00:34:44.984000 audit[2037]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.984000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffed0fc87a0 a2=0 a3=0 items=0 ppid=1969 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.984000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:34:44.986000 audit[2039]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.986000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffeb60da6c0 a2=0 a3=0 items=0 ppid=1969 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:34:44.988000 audit[2041]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.988000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffee4ce6e80 a2=0 a3=0 items=0 ppid=1969 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:34:44.989000 audit[2043]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:44.989000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff5192bd00 a2=0 a3=0 items=0 ppid=1969 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:44.989000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:34:45.024000 audit[2073]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.024000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffffcd50f10 a2=0 a3=0 items=0 ppid=1969 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.024000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:34:45.026000 audit[2075]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.026000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd7c073960 a2=0 a3=0 items=0 ppid=1969 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:34:45.027000 audit[2077]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.027000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe40c2dec0 a2=0 a3=0 items=0 ppid=1969 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:34:45.029000 audit[2079]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.029000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8740eb10 a2=0 a3=0 items=0 ppid=1969 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:34:45.031000 audit[2081]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.031000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6ef57430 a2=0 a3=0 items=0 ppid=1969 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:34:45.032000 audit[2083]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.032000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd3f05c760 a2=0 a3=0 items=0 ppid=1969 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.032000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:34:45.034000 audit[2085]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2085 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.034000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffbe2d0830 a2=0 a3=0 items=0 ppid=1969 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:34:45.036000 audit[2087]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.036000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffbffa4620 a2=0 a3=0 items=0 ppid=1969 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:34:45.039000 audit[2089]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.039000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdde3c30a0 a2=0 a3=0 items=0 ppid=1969 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 00:34:45.040000 audit[2091]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.040000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe05e88050 a2=0 a3=0 items=0 ppid=1969 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:34:45.042000 audit[2093]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.042000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcdc3ed940 a2=0 a3=0 items=0 ppid=1969 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.042000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:34:45.046000 audit[2095]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 00:34:45.046000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe4aa75a90 a2=0 a3=0 items=0 ppid=1969 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:34:45.048000 audit[2097]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.048000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff8360e320 a2=0 a3=0 items=0 ppid=1969 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:34:45.052000 audit[2102]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.052000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd50b3b9a0 a2=0 a3=0 items=0 ppid=1969 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:34:45.054000 audit[2104]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.054000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff3dacab20 a2=0 a3=0 items=0 ppid=1969 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:34:45.056000 audit[2106]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.056000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb7d5f980 a2=0 a3=0 items=0 ppid=1969 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.056000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:34:45.058000 audit[2108]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.058000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd069c2770 a2=0 a3=0 items=0 ppid=1969 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:34:45.060000 audit[2110]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.060000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc757c5760 a2=0 a3=0 items=0 ppid=1969 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.060000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:34:45.062000 audit[2112]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:34:45.062000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd5aec220 a2=0 a3=0 items=0 ppid=1969 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:34:45.094000 audit[2117]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.094000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff01634eb0 a2=0 a3=0 items=0 ppid=1969 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 00:34:45.097000 audit[2119]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.097000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdcb65e690 a2=0 a3=0 items=0 ppid=1969 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.097000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 00:34:45.104000 audit[2127]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.104000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc65b965a0 a2=0 a3=0 items=0 ppid=1969 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
24 00:34:45.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 24 00:34:45.118000 audit[2133]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.118000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd201433e0 a2=0 a3=0 items=0 ppid=1969 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 00:34:45.121000 audit[2135]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.121000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe72447030 a2=0 a3=0 items=0 ppid=1969 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.121000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 00:34:45.123000 audit[2137]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.123000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf92827e0 a2=0 a3=0 items=0 ppid=1969 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.123000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 00:34:45.124000 audit[2139]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.124000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd86889780 a2=0 a3=0 items=0 ppid=1969 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:34:45.126000 audit[2141]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:34:45.126000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdce516d70 a2=0 a3=0 items=0 ppid=1969 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:34:45.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 00:34:45.128380 systemd-networkd[1573]: docker0: Link UP Jan 24 00:34:45.133840 dockerd[1969]: time="2026-01-24T00:34:45.133473096Z" level=info msg="Loading containers: done." Jan 24 00:34:45.155653 dockerd[1969]: time="2026-01-24T00:34:45.155618090Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:34:45.155768 dockerd[1969]: time="2026-01-24T00:34:45.155681709Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 00:34:45.155768 dockerd[1969]: time="2026-01-24T00:34:45.155741300Z" level=info msg="Initializing buildkit" Jan 24 00:34:45.178583 dockerd[1969]: time="2026-01-24T00:34:45.178550870Z" level=info msg="Completed buildkit initialization" Jan 24 00:34:45.186335 dockerd[1969]: time="2026-01-24T00:34:45.186281875Z" level=info msg="Daemon has completed initialization" Jan 24 00:34:45.186505 dockerd[1969]: time="2026-01-24T00:34:45.186359375Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:34:45.186581 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:34:45.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:46.491968 containerd[1677]: time="2026-01-24T00:34:46.491874946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:34:46.667093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:34:46.670239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:34:46.770981 chronyd[1639]: Selected source PHC0 Jan 24 00:34:47.924457 systemd-resolved[1350]: Clock change detected. Flushing caches. Jan 24 00:34:46.771003 chronyd[1639]: System clock wrong by 1.153298 seconds Jan 24 00:34:47.924318 chronyd[1639]: System clock was stepped by 1.153298 seconds Jan 24 00:34:47.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:47.953592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
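The NETFILTER_CFG and PROCTITLE audit records above capture Docker building its DOCKER, DOCKER-FORWARD, DOCKER-USER and isolation chains; the proctitle field is the invoked command line, hex-encoded with NUL bytes between arguments. A minimal Python sketch (the sample value is copied verbatim from the pid 2102 record above) that turns those records back into readable iptables/ip6tables invocations:

    def decode_proctitle(hex_argv: str) -> str:
        # Audit PROCTITLE records store argv as hex with NUL separators; drop empty fields.
        raw = bytes.fromhex(hex_argv)
        return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

    sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
              "002D740066696C746572002D4E00444F434B45522D55534552")
    print(decode_proctitle(sample))
    # -> /usr/bin/iptables --wait -t filter -N DOCKER-USER

Applied across the stream, the decoded commands are the familiar docker0 setup seen in this boot: creating DOCKER-USER, appending the MASQUERADE rule for 172.17.0.0/16 out docker0, and wiring FORWARD through DOCKER-FORWARD.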
Jan 24 00:34:47.961460 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:34:48.000546 kubelet[2189]: E0124 00:34:48.000508 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:34:48.002529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:34:48.002654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:34:48.002972 systemd[1]: kubelet.service: Consumed 133ms CPU time, 110.9M memory peak. Jan 24 00:34:48.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:34:48.277956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552006802.mount: Deactivated successfully. Jan 24 00:34:49.068360 containerd[1677]: time="2026-01-24T00:34:49.068304983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:49.070074 containerd[1677]: time="2026-01-24T00:34:49.070050726Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 24 00:34:49.071431 containerd[1677]: time="2026-01-24T00:34:49.071332735Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:49.077231 containerd[1677]: time="2026-01-24T00:34:49.076984549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:49.077866 containerd[1677]: time="2026-01-24T00:34:49.077592314Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.431678535s" Jan 24 00:34:49.077866 containerd[1677]: time="2026-01-24T00:34:49.077624070Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:34:49.078640 containerd[1677]: time="2026-01-24T00:34:49.078624134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:34:50.358252 containerd[1677]: time="2026-01-24T00:34:50.357929882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:50.359191 containerd[1677]: time="2026-01-24T00:34:50.359032626Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 24 00:34:50.360204 containerd[1677]: time="2026-01-24T00:34:50.360187657Z" level=info msg="ImageCreate 
event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:50.363084 containerd[1677]: time="2026-01-24T00:34:50.363062830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:50.363833 containerd[1677]: time="2026-01-24T00:34:50.363813448Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.285168477s" Jan 24 00:34:50.363884 containerd[1677]: time="2026-01-24T00:34:50.363837244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 00:34:50.364246 containerd[1677]: time="2026-01-24T00:34:50.364232931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:34:51.460090 containerd[1677]: time="2026-01-24T00:34:51.460010839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:51.462074 containerd[1677]: time="2026-01-24T00:34:51.462018059Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 24 00:34:51.463687 containerd[1677]: time="2026-01-24T00:34:51.463642354Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:51.466855 containerd[1677]: time="2026-01-24T00:34:51.466811599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:51.467587 containerd[1677]: time="2026-01-24T00:34:51.467563228Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.103309494s" Jan 24 00:34:51.467628 containerd[1677]: time="2026-01-24T00:34:51.467587509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:34:51.468306 containerd[1677]: time="2026-01-24T00:34:51.468287961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:34:52.390251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370735939.mount: Deactivated successfully. 
Jan 24 00:34:52.745830 containerd[1677]: time="2026-01-24T00:34:52.745801206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:52.747876 containerd[1677]: time="2026-01-24T00:34:52.747857242Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 24 00:34:52.748806 containerd[1677]: time="2026-01-24T00:34:52.748789005Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:52.751095 containerd[1677]: time="2026-01-24T00:34:52.751075613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:52.751485 containerd[1677]: time="2026-01-24T00:34:52.751396857Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.283085877s" Jan 24 00:34:52.751547 containerd[1677]: time="2026-01-24T00:34:52.751538499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:34:52.755519 containerd[1677]: time="2026-01-24T00:34:52.755494617Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:34:53.378070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107710260.mount: Deactivated successfully. 
Jan 24 00:34:54.000069 containerd[1677]: time="2026-01-24T00:34:54.000015592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:54.001321 containerd[1677]: time="2026-01-24T00:34:54.001298722Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 24 00:34:54.001985 containerd[1677]: time="2026-01-24T00:34:54.001955507Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:54.004690 containerd[1677]: time="2026-01-24T00:34:54.004547499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:54.005433 containerd[1677]: time="2026-01-24T00:34:54.005257930Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.249735131s" Jan 24 00:34:54.005433 containerd[1677]: time="2026-01-24T00:34:54.005290289Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:34:54.005823 containerd[1677]: time="2026-01-24T00:34:54.005808774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:34:54.549244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981567835.mount: Deactivated successfully. 
Jan 24 00:34:54.559480 containerd[1677]: time="2026-01-24T00:34:54.558909360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:34:54.559632 containerd[1677]: time="2026-01-24T00:34:54.559617179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:34:54.560753 containerd[1677]: time="2026-01-24T00:34:54.560728729Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:34:54.562622 containerd[1677]: time="2026-01-24T00:34:54.562602236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:34:54.563029 containerd[1677]: time="2026-01-24T00:34:54.563006965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 557.13778ms" Jan 24 00:34:54.563064 containerd[1677]: time="2026-01-24T00:34:54.563035657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:34:54.563457 containerd[1677]: time="2026-01-24T00:34:54.563440909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:34:55.105529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827098660.mount: Deactivated successfully. 
Jan 24 00:34:56.581742 containerd[1677]: time="2026-01-24T00:34:56.581661946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:56.585793 containerd[1677]: time="2026-01-24T00:34:56.585535410Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 24 00:34:56.588118 containerd[1677]: time="2026-01-24T00:34:56.588096253Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:56.590894 containerd[1677]: time="2026-01-24T00:34:56.590863686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:56.591676 containerd[1677]: time="2026-01-24T00:34:56.591650056Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.028146041s" Jan 24 00:34:56.591718 containerd[1677]: time="2026-01-24T00:34:56.591683193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:34:58.242139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:34:58.245389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:34:58.365173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:34:58.369388 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 24 00:34:58.369458 kernel: audit: type=1130 audit(1769214898.364:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:58.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:58.376534 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:34:58.414849 kubelet[2406]: E0124 00:34:58.414812 2406 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:34:58.416786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:34:58.416909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:34:58.420249 kernel: audit: type=1131 audit(1769214898.416:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:34:58.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:34:58.417270 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.2M memory peak. Jan 24 00:34:59.630511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:34:59.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:59.630658 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.2M memory peak. Jan 24 00:34:59.640977 kernel: audit: type=1130 audit(1769214899.629:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:59.641070 kernel: audit: type=1131 audit(1769214899.629:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:59.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:34:59.635239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:34:59.666116 systemd[1]: Reload requested from client PID 2421 ('systemctl') (unit session-10.scope)... Jan 24 00:34:59.666243 systemd[1]: Reloading... Jan 24 00:34:59.779273 zram_generator::config[2463]: No configuration found. Jan 24 00:34:59.955507 systemd[1]: Reloading finished in 288 ms. 
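This is another kubelet start attempt failing the same way: /var/lib/kubelet/config.yaml does not exist yet, so the service exits and systemd schedules the next restart (the counter has gone from 2 to 3 above). On a node like this the file is typically written later by kubeadm, so the loop is expected rather than fatal. A hedged sketch for watching the loop from a saved journal; the patterns are copied from the messages above and the script name is illustrative:

    import re, sys

    RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    MISSING = re.compile(r"open /var/lib/kubelet/config\.yaml: no such file or directory")

    def kubelet_loop_status(journal_text: str) -> str:
        counters = [int(n) for n in RESTART.findall(journal_text)]
        misses = len(MISSING.findall(journal_text))
        last = counters[-1] if counters else 0
        return (f"{len(counters)} scheduled restart(s), counter now {last}; "
                f"config.yaml reported missing {misses} time(s)")

    if __name__ == "__main__":
        # e.g.  journalctl -u kubelet --no-pager | python3 kubelet_loop_status.py
        print(kubelet_loop_status(sys.stdin.read()))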
Jan 24 00:34:59.988498 kernel: audit: type=1334 audit(1769214899.980:289): prog-id=61 op=LOAD Jan 24 00:34:59.988612 kernel: audit: type=1334 audit(1769214899.980:290): prog-id=47 op=UNLOAD Jan 24 00:34:59.980000 audit: BPF prog-id=61 op=LOAD Jan 24 00:34:59.980000 audit: BPF prog-id=47 op=UNLOAD Jan 24 00:34:59.980000 audit: BPF prog-id=62 op=LOAD Jan 24 00:34:59.991227 kernel: audit: type=1334 audit(1769214899.980:291): prog-id=62 op=LOAD Jan 24 00:34:59.980000 audit: BPF prog-id=63 op=LOAD Jan 24 00:34:59.980000 audit: BPF prog-id=48 op=UNLOAD Jan 24 00:34:59.993579 kernel: audit: type=1334 audit(1769214899.980:292): prog-id=63 op=LOAD Jan 24 00:34:59.993622 kernel: audit: type=1334 audit(1769214899.980:293): prog-id=48 op=UNLOAD Jan 24 00:34:59.980000 audit: BPF prog-id=49 op=UNLOAD Jan 24 00:34:59.994792 kernel: audit: type=1334 audit(1769214899.980:294): prog-id=49 op=UNLOAD Jan 24 00:34:59.981000 audit: BPF prog-id=64 op=LOAD Jan 24 00:34:59.981000 audit: BPF prog-id=41 op=UNLOAD Jan 24 00:34:59.981000 audit: BPF prog-id=65 op=LOAD Jan 24 00:34:59.981000 audit: BPF prog-id=66 op=LOAD Jan 24 00:34:59.981000 audit: BPF prog-id=42 op=UNLOAD Jan 24 00:34:59.981000 audit: BPF prog-id=43 op=UNLOAD Jan 24 00:34:59.985000 audit: BPF prog-id=67 op=LOAD Jan 24 00:34:59.985000 audit: BPF prog-id=51 op=UNLOAD Jan 24 00:34:59.985000 audit: BPF prog-id=68 op=LOAD Jan 24 00:34:59.985000 audit: BPF prog-id=69 op=LOAD Jan 24 00:34:59.985000 audit: BPF prog-id=52 op=UNLOAD Jan 24 00:34:59.985000 audit: BPF prog-id=53 op=UNLOAD Jan 24 00:34:59.989000 audit: BPF prog-id=70 op=LOAD Jan 24 00:34:59.989000 audit: BPF prog-id=58 op=UNLOAD Jan 24 00:34:59.989000 audit: BPF prog-id=71 op=LOAD Jan 24 00:34:59.989000 audit: BPF prog-id=72 op=LOAD Jan 24 00:34:59.989000 audit: BPF prog-id=59 op=UNLOAD Jan 24 00:34:59.989000 audit: BPF prog-id=60 op=UNLOAD Jan 24 00:34:59.989000 audit: BPF prog-id=73 op=LOAD Jan 24 00:34:59.989000 audit: BPF prog-id=74 op=LOAD Jan 24 00:34:59.989000 audit: BPF prog-id=54 op=UNLOAD Jan 24 00:34:59.989000 audit: BPF prog-id=55 op=UNLOAD Jan 24 00:34:59.995000 audit: BPF prog-id=75 op=LOAD Jan 24 00:34:59.995000 audit: BPF prog-id=44 op=UNLOAD Jan 24 00:34:59.995000 audit: BPF prog-id=76 op=LOAD Jan 24 00:34:59.995000 audit: BPF prog-id=77 op=LOAD Jan 24 00:34:59.995000 audit: BPF prog-id=45 op=UNLOAD Jan 24 00:34:59.995000 audit: BPF prog-id=46 op=UNLOAD Jan 24 00:34:59.996000 audit: BPF prog-id=78 op=LOAD Jan 24 00:34:59.996000 audit: BPF prog-id=57 op=UNLOAD Jan 24 00:34:59.997000 audit: BPF prog-id=79 op=LOAD Jan 24 00:34:59.997000 audit: BPF prog-id=56 op=UNLOAD Jan 24 00:34:59.997000 audit: BPF prog-id=80 op=LOAD Jan 24 00:34:59.997000 audit: BPF prog-id=50 op=UNLOAD Jan 24 00:35:00.012682 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:35:00.012748 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:35:00.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:35:00.012998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:35:00.013046 systemd[1]: kubelet.service: Consumed 89ms CPU time, 98.5M memory peak. Jan 24 00:35:00.014360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:35:00.146735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
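The burst of audit: BPF prog-id=... op=LOAD/UNLOAD records during the reload above is most likely systemd replacing the per-unit cgroup BPF programs it manages, with each newly loaded id paired against the unload of the id it supersedes. A small sketch, assuming the audit text is available as plain lines, that checks the churn nets out:

    import re

    BPF = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def bpf_churn(text: str):
        loaded, unloaded = set(), set()
        for prog_id, op in BPF.findall(text):
            (loaded if op == "LOAD" else unloaded).add(int(prog_id))
        # ids loaded but never unloaded are the programs left attached after the reload
        return sorted(loaded - unloaded), sorted(unloaded - loaded)

    # still_attached, replaced = bpf_churn(open("console.log").read())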
Jan 24 00:35:00.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:35:00.158483 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:35:00.197225 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:35:00.197225 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:35:00.197225 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:35:00.197225 kubelet[2521]: I0124 00:35:00.197130 2521 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:35:00.372630 kubelet[2521]: I0124 00:35:00.372562 2521 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:35:00.372630 kubelet[2521]: I0124 00:35:00.372588 2521 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:35:00.372821 kubelet[2521]: I0124 00:35:00.372804 2521 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:35:00.423989 kubelet[2521]: E0124 00:35:00.423946 2521 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.1.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.1.115:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:35:00.424204 kubelet[2521]: I0124 00:35:00.424181 2521 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:35:00.432360 kubelet[2521]: I0124 00:35:00.432336 2521 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:35:00.434972 kubelet[2521]: I0124 00:35:00.434944 2521 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:35:00.435181 kubelet[2521]: I0124 00:35:00.435145 2521 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:35:00.435351 kubelet[2521]: I0124 00:35:00.435172 2521 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4593-0-0-7-bbab233dcd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:35:00.436113 kubelet[2521]: I0124 00:35:00.436090 2521 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:35:00.436113 kubelet[2521]: I0124 00:35:00.436105 2521 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:35:00.436262 kubelet[2521]: I0124 00:35:00.436248 2521 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:35:00.442972 kubelet[2521]: I0124 00:35:00.442926 2521 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:35:00.442972 kubelet[2521]: I0124 00:35:00.442953 2521 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:35:00.443056 kubelet[2521]: I0124 00:35:00.443022 2521 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:35:00.443056 kubelet[2521]: I0124 00:35:00.443033 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:35:00.446387 kubelet[2521]: W0124 00:35:00.445951 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.1.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4593-0-0-7-bbab233dcd&limit=500&resourceVersion=0": dial tcp 10.0.1.115:6443: connect: connection refused Jan 24 00:35:00.446387 kubelet[2521]: E0124 00:35:00.446025 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.1.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4593-0-0-7-bbab233dcd&limit=500&resourceVersion=0\": dial tcp 10.0.1.115:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:35:00.446552 kubelet[2521]: 
W0124 00:35:00.446524 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.1.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.1.115:6443: connect: connection refused Jan 24 00:35:00.446595 kubelet[2521]: E0124 00:35:00.446561 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.1.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.1.115:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:35:00.447669 kubelet[2521]: I0124 00:35:00.446890 2521 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:35:00.447669 kubelet[2521]: I0124 00:35:00.447203 2521 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:35:00.447669 kubelet[2521]: W0124 00:35:00.447253 2521 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:35:00.449767 kubelet[2521]: I0124 00:35:00.449746 2521 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:35:00.449828 kubelet[2521]: I0124 00:35:00.449777 2521 server.go:1287] "Started kubelet" Jan 24 00:35:00.452781 kubelet[2521]: I0124 00:35:00.452750 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:35:00.454000 audit[2532]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.454000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd28d2eda0 a2=0 a3=0 items=0 ppid=2521 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:35:00.456000 audit[2533]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.456000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecbbc64e0 a2=0 a3=0 items=0 ppid=2521 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:35:00.459416 kubelet[2521]: I0124 00:35:00.459385 2521 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:35:00.460269 kubelet[2521]: E0124 00:35:00.459949 2521 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:35:00.460898 kubelet[2521]: I0124 00:35:00.460846 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:35:00.460952 kubelet[2521]: I0124 00:35:00.460880 2521 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:35:00.461127 kubelet[2521]: I0124 00:35:00.461114 2521 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:35:00.461671 kubelet[2521]: I0124 00:35:00.461659 2521 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:35:00.467817 kubelet[2521]: I0124 00:35:00.467793 2521 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:35:00.467923 kubelet[2521]: E0124 00:35:00.467910 2521 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" Jan 24 00:35:00.468534 kubelet[2521]: I0124 00:35:00.468524 2521 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:35:00.468618 kubelet[2521]: I0124 00:35:00.468613 2521 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:35:00.470000 audit[2535]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.470000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcb4401670 a2=0 a3=0 items=0 ppid=2521 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:35:00.473006 kubelet[2521]: W0124 00:35:00.472967 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.1.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.1.115:6443: connect: connection refused Jan 24 00:35:00.473058 kubelet[2521]: E0124 00:35:00.473010 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.1.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.1.115:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:35:00.473080 kubelet[2521]: E0124 00:35:00.473067 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.1.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593-0-0-7-bbab233dcd?timeout=10s\": dial tcp 10.0.1.115:6443: connect: connection refused" interval="200ms" Jan 24 00:35:00.473132 kubelet[2521]: E0124 00:35:00.470902 2521 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.1.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.1.115:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4593-0-0-7-bbab233dcd.188d839408a2d2c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4593-0-0-7-bbab233dcd,UID:ci-4593-0-0-7-bbab233dcd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4593-0-0-7-bbab233dcd,},FirstTimestamp:2026-01-24 00:35:00.449759942 +0000 UTC m=+0.287975839,LastTimestamp:2026-01-24 00:35:00.449759942 +0000 UTC m=+0.287975839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4593-0-0-7-bbab233dcd,}" Jan 24 00:35:00.474065 kubelet[2521]: I0124 00:35:00.473588 2521 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:35:00.474065 kubelet[2521]: I0124 00:35:00.473673 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:35:00.473000 audit[2537]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.473000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc7f61f550 a2=0 a3=0 items=0 ppid=2521 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:35:00.475835 kubelet[2521]: I0124 00:35:00.475823 2521 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:35:00.480000 audit[2541]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.480000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcb1bb5730 a2=0 a3=0 items=0 ppid=2521 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 24 00:35:00.481869 kubelet[2521]: I0124 00:35:00.481840 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:35:00.481000 audit[2542]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:00.481000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7cf12040 a2=0 a3=0 items=0 ppid=2521 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:35:00.485082 kubelet[2521]: I0124 00:35:00.485069 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:35:00.485242 kubelet[2521]: I0124 00:35:00.485235 2521 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:35:00.485303 kubelet[2521]: I0124 00:35:00.485297 2521 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:35:00.485341 kubelet[2521]: I0124 00:35:00.485337 2521 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:35:00.484000 audit[2544]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.484000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd7eefd00 a2=0 a3=0 items=0 ppid=2521 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:35:00.486290 kubelet[2521]: E0124 00:35:00.485760 2521 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:35:00.485000 audit[2547]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:00.487096 kubelet[2521]: W0124 00:35:00.487069 2521 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.1.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.1.115:6443: connect: connection refused Jan 24 00:35:00.487160 kubelet[2521]: E0124 00:35:00.487149 2521 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.1.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.1.115:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:35:00.485000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1001ff30 a2=0 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:35:00.486000 audit[2548]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.486000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc43295b0 a2=0 a3=0 items=0 ppid=2521 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:35:00.488000 audit[2550]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 00:35:00.488000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd71e1c9a0 a2=0 a3=0 items=0 ppid=2521 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.488000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:35:00.489000 audit[2549]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:00.489000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffea781890 a2=0 a3=0 items=0 ppid=2521 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:35:00.489000 audit[2551]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:00.489000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb87872e0 a2=0 a3=0 items=0 ppid=2521 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:00.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:35:00.496035 kubelet[2521]: I0124 00:35:00.496020 2521 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:35:00.496035 kubelet[2521]: I0124 00:35:00.496032 2521 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:35:00.496121 kubelet[2521]: I0124 00:35:00.496099 2521 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:35:00.498715 kubelet[2521]: I0124 00:35:00.498698 2521 policy_none.go:49] "None policy: Start" Jan 24 00:35:00.498715 kubelet[2521]: I0124 00:35:00.498714 2521 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:35:00.498766 kubelet[2521]: I0124 00:35:00.498724 2521 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:35:00.503866 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:35:00.518941 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:35:00.521839 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 24 00:35:00.533577 kubelet[2521]: I0124 00:35:00.532823 2521 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:35:00.533577 kubelet[2521]: I0124 00:35:00.532970 2521 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:35:00.533577 kubelet[2521]: I0124 00:35:00.532978 2521 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:35:00.533577 kubelet[2521]: I0124 00:35:00.533483 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:35:00.534815 kubelet[2521]: E0124 00:35:00.534800 2521 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:35:00.534913 kubelet[2521]: E0124 00:35:00.534906 2521 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4593-0-0-7-bbab233dcd\" not found" Jan 24 00:35:00.594656 systemd[1]: Created slice kubepods-burstable-pod19593c54166ddd6e5976ac4933059e66.slice - libcontainer container kubepods-burstable-pod19593c54166ddd6e5976ac4933059e66.slice. Jan 24 00:35:00.602843 kubelet[2521]: E0124 00:35:00.602822 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.605521 systemd[1]: Created slice kubepods-burstable-pode6ab8a7eb68fc324e02221d5755a6265.slice - libcontainer container kubepods-burstable-pode6ab8a7eb68fc324e02221d5755a6265.slice. Jan 24 00:35:00.616333 kubelet[2521]: E0124 00:35:00.616308 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.618810 systemd[1]: Created slice kubepods-burstable-pod2c7a4b6b05bddd2c840c0e29ffdd3885.slice - libcontainer container kubepods-burstable-pod2c7a4b6b05bddd2c840c0e29ffdd3885.slice. 
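With the systemd cgroup driver reported earlier (CgroupDriver "systemd", CgroupRoot "/"), each static pod gets its own slice nested under kubepods-burstable.slice, which is what the Created slice kubepods-burstable-pod<hash>.slice entries above show; systemd encodes a slice's parent chain directly in its dash-separated name. A small sketch that rebuilds that hierarchy from the log, assuming (as in the slices above) the pod-UID portion contains no unescaped dashes:

    import re

    SLICE = re.compile(r"Created slice (kubepods[\w.-]*\.slice)")

    def parent_of(slice_name: str) -> str:
        # kubepods-burstable-pod<uid>.slice -> kubepods-burstable.slice -> kubepods.slice -> -.slice
        stem = slice_name[: -len(".slice")]
        return (stem.rsplit("-", 1)[0] + ".slice") if "-" in stem else "-.slice"

    def slice_hierarchy(journal_text: str):
        for name in SLICE.findall(journal_text):
            yield parent_of(name), name

    # for parent, child in slice_hierarchy(open("console.log").read()):
    #     print(f"{parent} -> {child}")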
Jan 24 00:35:00.620153 kubelet[2521]: E0124 00:35:00.620086 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.636612 kubelet[2521]: I0124 00:35:00.634980 2521 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.636963 kubelet[2521]: E0124 00:35:00.636938 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.1.115:6443/api/v1/nodes\": dial tcp 10.0.1.115:6443: connect: connection refused" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670394 kubelet[2521]: I0124 00:35:00.670356 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-flexvolume-dir\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670394 kubelet[2521]: I0124 00:35:00.670392 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-k8s-certs\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670394 kubelet[2521]: I0124 00:35:00.670410 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6ab8a7eb68fc324e02221d5755a6265-kubeconfig\") pod \"kube-scheduler-ci-4593-0-0-7-bbab233dcd\" (UID: \"e6ab8a7eb68fc324e02221d5755a6265\") " pod="kube-system/kube-scheduler-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670558 kubelet[2521]: I0124 00:35:00.670426 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c7a4b6b05bddd2c840c0e29ffdd3885-k8s-certs\") pod \"kube-apiserver-ci-4593-0-0-7-bbab233dcd\" (UID: \"2c7a4b6b05bddd2c840c0e29ffdd3885\") " pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670558 kubelet[2521]: I0124 00:35:00.670442 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c7a4b6b05bddd2c840c0e29ffdd3885-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4593-0-0-7-bbab233dcd\" (UID: \"2c7a4b6b05bddd2c840c0e29ffdd3885\") " pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670558 kubelet[2521]: I0124 00:35:00.670468 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-ca-certs\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670558 kubelet[2521]: I0124 00:35:00.670483 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-kubeconfig\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" 
(UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670558 kubelet[2521]: I0124 00:35:00.670497 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.670686 kubelet[2521]: I0124 00:35:00.670512 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c7a4b6b05bddd2c840c0e29ffdd3885-ca-certs\") pod \"kube-apiserver-ci-4593-0-0-7-bbab233dcd\" (UID: \"2c7a4b6b05bddd2c840c0e29ffdd3885\") " pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.674074 kubelet[2521]: E0124 00:35:00.674051 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.1.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593-0-0-7-bbab233dcd?timeout=10s\": dial tcp 10.0.1.115:6443: connect: connection refused" interval="400ms" Jan 24 00:35:00.839835 kubelet[2521]: I0124 00:35:00.839764 2521 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.840540 kubelet[2521]: E0124 00:35:00.840460 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.1.115:6443/api/v1/nodes\": dial tcp 10.0.1.115:6443: connect: connection refused" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:00.905750 containerd[1677]: time="2026-01-24T00:35:00.905368935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4593-0-0-7-bbab233dcd,Uid:19593c54166ddd6e5976ac4933059e66,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:00.918541 containerd[1677]: time="2026-01-24T00:35:00.918051075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4593-0-0-7-bbab233dcd,Uid:e6ab8a7eb68fc324e02221d5755a6265,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:00.921639 containerd[1677]: time="2026-01-24T00:35:00.921554245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4593-0-0-7-bbab233dcd,Uid:2c7a4b6b05bddd2c840c0e29ffdd3885,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:00.980458 containerd[1677]: time="2026-01-24T00:35:00.980400532Z" level=info msg="connecting to shim 55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f" address="unix:///run/containerd/s/0cba3ffebf644aa293862e21665a5523aaf422949f33d1c307e5dc482da6a391" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:00.984465 containerd[1677]: time="2026-01-24T00:35:00.984415193Z" level=info msg="connecting to shim bb90f91d3ad539dec6a6c9d66e0e1709a3e64cb82ea2f53b35460613e74268ab" address="unix:///run/containerd/s/e93d2e43bd1ea3b435e02c10fe0738cce215eb003a2a3d849ef372b4364c30ca" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:00.991144 containerd[1677]: time="2026-01-24T00:35:00.991074283Z" level=info msg="connecting to shim c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f" address="unix:///run/containerd/s/9dc864d4726b4bf9a50968401b7d6c54f58039cd7fb131bb7b7a3e6e4d1e91a1" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:01.027563 systemd[1]: Started 
cri-containerd-55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f.scope - libcontainer container 55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f. Jan 24 00:35:01.041382 systemd[1]: Started cri-containerd-bb90f91d3ad539dec6a6c9d66e0e1709a3e64cb82ea2f53b35460613e74268ab.scope - libcontainer container bb90f91d3ad539dec6a6c9d66e0e1709a3e64cb82ea2f53b35460613e74268ab. Jan 24 00:35:01.046374 systemd[1]: Started cri-containerd-c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f.scope - libcontainer container c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f. Jan 24 00:35:01.050000 audit: BPF prog-id=81 op=LOAD Jan 24 00:35:01.051000 audit: BPF prog-id=82 op=LOAD Jan 24 00:35:01.051000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2570 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535616439366239633733306635666331653239643538393236623566 Jan 24 00:35:01.051000 audit: BPF prog-id=82 op=UNLOAD Jan 24 00:35:01.051000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535616439366239633733306635666331653239643538393236623566 Jan 24 00:35:01.051000 audit: BPF prog-id=83 op=LOAD Jan 24 00:35:01.051000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2570 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535616439366239633733306635666331653239643538393236623566 Jan 24 00:35:01.051000 audit: BPF prog-id=84 op=LOAD Jan 24 00:35:01.051000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2570 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535616439366239633733306635666331653239643538393236623566 Jan 24 00:35:01.051000 audit: BPF prog-id=84 op=UNLOAD Jan 24 00:35:01.051000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 
items=0 ppid=2570 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535616439366239633733306635666331653239643538393236623566 Jan 24 00:35:01.051000 audit: BPF prog-id=83 op=UNLOAD Jan 24 00:35:01.051000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535616439366239633733306635666331653239643538393236623566 Jan 24 00:35:01.051000 audit: BPF prog-id=85 op=LOAD Jan 24 00:35:01.051000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2570 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535616439366239633733306635666331653239643538393236623566 Jan 24 00:35:01.059000 audit: BPF prog-id=86 op=LOAD Jan 24 00:35:01.060000 audit: BPF prog-id=87 op=LOAD Jan 24 00:35:01.060000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2578 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262393066393164336164353339646563366136633964363665306531 Jan 24 00:35:01.060000 audit: BPF prog-id=87 op=UNLOAD Jan 24 00:35:01.060000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262393066393164336164353339646563366136633964363665306531 Jan 24 00:35:01.060000 audit: BPF prog-id=88 op=LOAD Jan 24 00:35:01.060000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2578 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262393066393164336164353339646563366136633964363665306531 Jan 24 00:35:01.060000 audit: BPF prog-id=89 op=LOAD Jan 24 00:35:01.060000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2578 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262393066393164336164353339646563366136633964363665306531 Jan 24 00:35:01.061000 audit: BPF prog-id=89 op=UNLOAD Jan 24 00:35:01.061000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262393066393164336164353339646563366136633964363665306531 Jan 24 00:35:01.061000 audit: BPF prog-id=88 op=UNLOAD Jan 24 00:35:01.061000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262393066393164336164353339646563366136633964363665306531 Jan 24 00:35:01.061000 audit: BPF prog-id=90 op=LOAD Jan 24 00:35:01.061000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2578 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262393066393164336164353339646563366136633964363665306531 Jan 24 00:35:01.067000 audit: BPF prog-id=91 op=LOAD Jan 24 00:35:01.067000 audit: BPF prog-id=92 op=LOAD Jan 24 00:35:01.067000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 24 00:35:01.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334663233653162316465336533343862313766396332653065383937 Jan 24 00:35:01.067000 audit: BPF prog-id=92 op=UNLOAD Jan 24 00:35:01.067000 audit[2633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334663233653162316465336533343862313766396332653065383937 Jan 24 00:35:01.068000 audit: BPF prog-id=93 op=LOAD Jan 24 00:35:01.068000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334663233653162316465336533343862313766396332653065383937 Jan 24 00:35:01.068000 audit: BPF prog-id=94 op=LOAD Jan 24 00:35:01.068000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334663233653162316465336533343862313766396332653065383937 Jan 24 00:35:01.068000 audit: BPF prog-id=94 op=UNLOAD Jan 24 00:35:01.068000 audit[2633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334663233653162316465336533343862313766396332653065383937 Jan 24 00:35:01.068000 audit: BPF prog-id=93 op=UNLOAD Jan 24 00:35:01.068000 audit[2633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.068000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334663233653162316465336533343862313766396332653065383937 Jan 24 00:35:01.068000 audit: BPF prog-id=95 op=LOAD Jan 24 00:35:01.068000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334663233653162316465336533343862313766396332653065383937 Jan 24 00:35:01.075496 kubelet[2521]: E0124 00:35:01.075442 2521 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.1.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593-0-0-7-bbab233dcd?timeout=10s\": dial tcp 10.0.1.115:6443: connect: connection refused" interval="800ms" Jan 24 00:35:01.116673 containerd[1677]: time="2026-01-24T00:35:01.116628496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4593-0-0-7-bbab233dcd,Uid:2c7a4b6b05bddd2c840c0e29ffdd3885,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb90f91d3ad539dec6a6c9d66e0e1709a3e64cb82ea2f53b35460613e74268ab\"" Jan 24 00:35:01.118450 containerd[1677]: time="2026-01-24T00:35:01.118366673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4593-0-0-7-bbab233dcd,Uid:19593c54166ddd6e5976ac4933059e66,Namespace:kube-system,Attempt:0,} returns sandbox id \"55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f\"" Jan 24 00:35:01.121800 containerd[1677]: time="2026-01-24T00:35:01.121271133Z" level=info msg="CreateContainer within sandbox \"bb90f91d3ad539dec6a6c9d66e0e1709a3e64cb82ea2f53b35460613e74268ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:35:01.121800 containerd[1677]: time="2026-01-24T00:35:01.121570499Z" level=info msg="CreateContainer within sandbox \"55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:35:01.123273 containerd[1677]: time="2026-01-24T00:35:01.122678218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4593-0-0-7-bbab233dcd,Uid:e6ab8a7eb68fc324e02221d5755a6265,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f\"" Jan 24 00:35:01.125096 containerd[1677]: time="2026-01-24T00:35:01.125081766Z" level=info msg="CreateContainer within sandbox \"c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:35:01.133953 containerd[1677]: time="2026-01-24T00:35:01.133932427Z" level=info msg="Container fbddec696fa2c6a34ebb19795b32dfb50b9eab9564904085708b7991a1f34a38: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:01.136023 containerd[1677]: time="2026-01-24T00:35:01.136005607Z" level=info msg="Container e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:01.138122 
containerd[1677]: time="2026-01-24T00:35:01.138105747Z" level=info msg="Container d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:01.148653 containerd[1677]: time="2026-01-24T00:35:01.148629766Z" level=info msg="CreateContainer within sandbox \"bb90f91d3ad539dec6a6c9d66e0e1709a3e64cb82ea2f53b35460613e74268ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fbddec696fa2c6a34ebb19795b32dfb50b9eab9564904085708b7991a1f34a38\"" Jan 24 00:35:01.149621 containerd[1677]: time="2026-01-24T00:35:01.149594471Z" level=info msg="CreateContainer within sandbox \"55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e\"" Jan 24 00:35:01.149887 containerd[1677]: time="2026-01-24T00:35:01.149872154Z" level=info msg="StartContainer for \"e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e\"" Jan 24 00:35:01.164933 containerd[1677]: time="2026-01-24T00:35:01.164287696Z" level=info msg="StartContainer for \"fbddec696fa2c6a34ebb19795b32dfb50b9eab9564904085708b7991a1f34a38\"" Jan 24 00:35:01.167039 containerd[1677]: time="2026-01-24T00:35:01.167015603Z" level=info msg="connecting to shim fbddec696fa2c6a34ebb19795b32dfb50b9eab9564904085708b7991a1f34a38" address="unix:///run/containerd/s/e93d2e43bd1ea3b435e02c10fe0738cce215eb003a2a3d849ef372b4364c30ca" protocol=ttrpc version=3 Jan 24 00:35:01.170165 containerd[1677]: time="2026-01-24T00:35:01.168792547Z" level=info msg="connecting to shim e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e" address="unix:///run/containerd/s/0cba3ffebf644aa293862e21665a5523aaf422949f33d1c307e5dc482da6a391" protocol=ttrpc version=3 Jan 24 00:35:01.179100 containerd[1677]: time="2026-01-24T00:35:01.179057621Z" level=info msg="CreateContainer within sandbox \"c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1\"" Jan 24 00:35:01.184676 containerd[1677]: time="2026-01-24T00:35:01.184596485Z" level=info msg="StartContainer for \"d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1\"" Jan 24 00:35:01.186750 containerd[1677]: time="2026-01-24T00:35:01.186728118Z" level=info msg="connecting to shim d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1" address="unix:///run/containerd/s/9dc864d4726b4bf9a50968401b7d6c54f58039cd7fb131bb7b7a3e6e4d1e91a1" protocol=ttrpc version=3 Jan 24 00:35:01.190387 systemd[1]: Started cri-containerd-fbddec696fa2c6a34ebb19795b32dfb50b9eab9564904085708b7991a1f34a38.scope - libcontainer container fbddec696fa2c6a34ebb19795b32dfb50b9eab9564904085708b7991a1f34a38. Jan 24 00:35:01.202499 systemd[1]: Started cri-containerd-e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e.scope - libcontainer container e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e. Jan 24 00:35:01.214373 systemd[1]: Started cri-containerd-d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1.scope - libcontainer container d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1. 
Jan 24 00:35:01.215000 audit: BPF prog-id=96 op=LOAD Jan 24 00:35:01.216000 audit: BPF prog-id=97 op=LOAD Jan 24 00:35:01.216000 audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2578 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662646465633639366661326336613334656262313937393562333264 Jan 24 00:35:01.216000 audit: BPF prog-id=97 op=UNLOAD Jan 24 00:35:01.216000 audit[2694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662646465633639366661326336613334656262313937393562333264 Jan 24 00:35:01.217000 audit: BPF prog-id=98 op=LOAD Jan 24 00:35:01.217000 audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2578 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662646465633639366661326336613334656262313937393562333264 Jan 24 00:35:01.217000 audit: BPF prog-id=99 op=LOAD Jan 24 00:35:01.217000 audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2578 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662646465633639366661326336613334656262313937393562333264 Jan 24 00:35:01.217000 audit: BPF prog-id=99 op=UNLOAD Jan 24 00:35:01.217000 audit[2694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662646465633639366661326336613334656262313937393562333264 Jan 24 00:35:01.217000 audit: BPF prog-id=98 op=UNLOAD Jan 24 00:35:01.217000 audit[2694]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2578 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662646465633639366661326336613334656262313937393562333264 Jan 24 00:35:01.217000 audit: BPF prog-id=100 op=LOAD Jan 24 00:35:01.217000 audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2578 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662646465633639366661326336613334656262313937393562333264 Jan 24 00:35:01.223000 audit: BPF prog-id=101 op=LOAD Jan 24 00:35:01.223000 audit: BPF prog-id=102 op=LOAD Jan 24 00:35:01.223000 audit[2695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2570 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643938373966633736313137623139666562656661306532623564 Jan 24 00:35:01.223000 audit: BPF prog-id=102 op=UNLOAD Jan 24 00:35:01.223000 audit[2695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643938373966633736313137623139666562656661306532623564 Jan 24 00:35:01.223000 audit: BPF prog-id=103 op=LOAD Jan 24 00:35:01.223000 audit[2695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2570 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643938373966633736313137623139666562656661306532623564 Jan 24 00:35:01.223000 audit: BPF prog-id=104 op=LOAD Jan 24 00:35:01.223000 audit[2695]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2570 pid=2695 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643938373966633736313137623139666562656661306532623564 Jan 24 00:35:01.223000 audit: BPF prog-id=104 op=UNLOAD Jan 24 00:35:01.223000 audit[2695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643938373966633736313137623139666562656661306532623564 Jan 24 00:35:01.223000 audit: BPF prog-id=103 op=UNLOAD Jan 24 00:35:01.223000 audit[2695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643938373966633736313137623139666562656661306532623564 Jan 24 00:35:01.223000 audit: BPF prog-id=105 op=LOAD Jan 24 00:35:01.223000 audit[2695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2570 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643938373966633736313137623139666562656661306532623564 Jan 24 00:35:01.244050 kubelet[2521]: I0124 00:35:01.244029 2521 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:01.245204 kubelet[2521]: E0124 00:35:01.245174 2521 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.1.115:6443/api/v1/nodes\": dial tcp 10.0.1.115:6443: connect: connection refused" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:01.246000 audit: BPF prog-id=106 op=LOAD Jan 24 00:35:01.247000 audit: BPF prog-id=107 op=LOAD Jan 24 00:35:01.247000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2587 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.247000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436323535383966653065626336656464323231313539646431356364 Jan 24 00:35:01.247000 audit: BPF prog-id=107 op=UNLOAD Jan 24 00:35:01.247000 audit[2716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436323535383966653065626336656464323231313539646431356364 Jan 24 00:35:01.247000 audit: BPF prog-id=108 op=LOAD Jan 24 00:35:01.247000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2587 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436323535383966653065626336656464323231313539646431356364 Jan 24 00:35:01.247000 audit: BPF prog-id=109 op=LOAD Jan 24 00:35:01.247000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2587 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436323535383966653065626336656464323231313539646431356364 Jan 24 00:35:01.248000 audit: BPF prog-id=109 op=UNLOAD Jan 24 00:35:01.248000 audit[2716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436323535383966653065626336656464323231313539646431356364 Jan 24 00:35:01.248000 audit: BPF prog-id=108 op=UNLOAD Jan 24 00:35:01.248000 audit[2716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.248000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436323535383966653065626336656464323231313539646431356364 Jan 24 00:35:01.248000 audit: BPF prog-id=110 op=LOAD Jan 24 00:35:01.248000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2587 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:01.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436323535383966653065626336656464323231313539646431356364 Jan 24 00:35:01.273615 containerd[1677]: time="2026-01-24T00:35:01.273581564Z" level=info msg="StartContainer for \"fbddec696fa2c6a34ebb19795b32dfb50b9eab9564904085708b7991a1f34a38\" returns successfully" Jan 24 00:35:01.278182 containerd[1677]: time="2026-01-24T00:35:01.278155380Z" level=info msg="StartContainer for \"e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e\" returns successfully" Jan 24 00:35:01.316562 containerd[1677]: time="2026-01-24T00:35:01.316528989Z" level=info msg="StartContainer for \"d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1\" returns successfully" Jan 24 00:35:01.501834 kubelet[2521]: E0124 00:35:01.501809 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:01.502303 kubelet[2521]: E0124 00:35:01.502130 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:01.503696 kubelet[2521]: E0124 00:35:01.503682 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.047692 kubelet[2521]: I0124 00:35:02.047671 2521 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.505794 kubelet[2521]: E0124 00:35:02.505769 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.506431 kubelet[2521]: E0124 00:35:02.506415 2521 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.642133 kubelet[2521]: E0124 00:35:02.642100 2521 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4593-0-0-7-bbab233dcd\" not found" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.731082 kubelet[2521]: I0124 00:35:02.731052 2521 kubelet_node_status.go:78] "Successfully registered node" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.731082 kubelet[2521]: E0124 00:35:02.731082 2521 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4593-0-0-7-bbab233dcd\": node \"ci-4593-0-0-7-bbab233dcd\" 
not found" Jan 24 00:35:02.768303 kubelet[2521]: I0124 00:35:02.768182 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.773216 kubelet[2521]: E0124 00:35:02.773165 2521 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4593-0-0-7-bbab233dcd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.773216 kubelet[2521]: I0124 00:35:02.773187 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.774768 kubelet[2521]: E0124 00:35:02.774744 2521 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4593-0-0-7-bbab233dcd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.774768 kubelet[2521]: I0124 00:35:02.774761 2521 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:02.775975 kubelet[2521]: E0124 00:35:02.775954 2521 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:03.447865 kubelet[2521]: I0124 00:35:03.447824 2521 apiserver.go:52] "Watching apiserver" Jan 24 00:35:03.469193 kubelet[2521]: I0124 00:35:03.469157 2521 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:35:05.119294 systemd[1]: Reload requested from client PID 2790 ('systemctl') (unit session-10.scope)... Jan 24 00:35:05.119742 systemd[1]: Reloading... Jan 24 00:35:05.213234 zram_generator::config[2836]: No configuration found. Jan 24 00:35:05.407870 systemd[1]: Reloading finished in 287 ms. Jan 24 00:35:05.429666 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:35:05.442973 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:35:05.443260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:35:05.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:35:05.444351 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 24 00:35:05.444426 kernel: audit: type=1131 audit(1769214905.442:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:35:05.446274 systemd[1]: kubelet.service: Consumed 577ms CPU time, 131.4M memory peak. Jan 24 00:35:05.448107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:35:05.447000 audit: BPF prog-id=111 op=LOAD Jan 24 00:35:05.447000 audit: BPF prog-id=80 op=UNLOAD Jan 24 00:35:05.448000 audit: BPF prog-id=112 op=LOAD Jan 24 00:35:05.452574 kernel: audit: type=1334 audit(1769214905.447:392): prog-id=111 op=LOAD Jan 24 00:35:05.452619 kernel: audit: type=1334 audit(1769214905.447:393): prog-id=80 op=UNLOAD Jan 24 00:35:05.452638 kernel: audit: type=1334 audit(1769214905.448:394): prog-id=112 op=LOAD Jan 24 00:35:05.448000 audit: BPF prog-id=113 op=LOAD Jan 24 00:35:05.448000 audit: BPF prog-id=73 op=UNLOAD Jan 24 00:35:05.456527 kernel: audit: type=1334 audit(1769214905.448:395): prog-id=113 op=LOAD Jan 24 00:35:05.456563 kernel: audit: type=1334 audit(1769214905.448:396): prog-id=73 op=UNLOAD Jan 24 00:35:05.448000 audit: BPF prog-id=74 op=UNLOAD Jan 24 00:35:05.457558 kernel: audit: type=1334 audit(1769214905.448:397): prog-id=74 op=UNLOAD Jan 24 00:35:05.449000 audit: BPF prog-id=114 op=LOAD Jan 24 00:35:05.458526 kernel: audit: type=1334 audit(1769214905.449:398): prog-id=114 op=LOAD Jan 24 00:35:05.449000 audit: BPF prog-id=67 op=UNLOAD Jan 24 00:35:05.459476 kernel: audit: type=1334 audit(1769214905.449:399): prog-id=67 op=UNLOAD Jan 24 00:35:05.449000 audit: BPF prog-id=115 op=LOAD Jan 24 00:35:05.460503 kernel: audit: type=1334 audit(1769214905.449:400): prog-id=115 op=LOAD Jan 24 00:35:05.449000 audit: BPF prog-id=116 op=LOAD Jan 24 00:35:05.449000 audit: BPF prog-id=68 op=UNLOAD Jan 24 00:35:05.449000 audit: BPF prog-id=69 op=UNLOAD Jan 24 00:35:05.449000 audit: BPF prog-id=117 op=LOAD Jan 24 00:35:05.449000 audit: BPF prog-id=64 op=UNLOAD Jan 24 00:35:05.449000 audit: BPF prog-id=118 op=LOAD Jan 24 00:35:05.449000 audit: BPF prog-id=119 op=LOAD Jan 24 00:35:05.449000 audit: BPF prog-id=65 op=UNLOAD Jan 24 00:35:05.449000 audit: BPF prog-id=66 op=UNLOAD Jan 24 00:35:05.450000 audit: BPF prog-id=120 op=LOAD Jan 24 00:35:05.450000 audit: BPF prog-id=61 op=UNLOAD Jan 24 00:35:05.450000 audit: BPF prog-id=121 op=LOAD Jan 24 00:35:05.450000 audit: BPF prog-id=122 op=LOAD Jan 24 00:35:05.450000 audit: BPF prog-id=62 op=UNLOAD Jan 24 00:35:05.450000 audit: BPF prog-id=63 op=UNLOAD Jan 24 00:35:05.451000 audit: BPF prog-id=123 op=LOAD Jan 24 00:35:05.451000 audit: BPF prog-id=78 op=UNLOAD Jan 24 00:35:05.451000 audit: BPF prog-id=124 op=LOAD Jan 24 00:35:05.451000 audit: BPF prog-id=79 op=UNLOAD Jan 24 00:35:05.453000 audit: BPF prog-id=125 op=LOAD Jan 24 00:35:05.453000 audit: BPF prog-id=70 op=UNLOAD Jan 24 00:35:05.454000 audit: BPF prog-id=126 op=LOAD Jan 24 00:35:05.454000 audit: BPF prog-id=127 op=LOAD Jan 24 00:35:05.454000 audit: BPF prog-id=71 op=UNLOAD Jan 24 00:35:05.454000 audit: BPF prog-id=72 op=UNLOAD Jan 24 00:35:05.455000 audit: BPF prog-id=128 op=LOAD Jan 24 00:35:05.455000 audit: BPF prog-id=75 op=UNLOAD Jan 24 00:35:05.455000 audit: BPF prog-id=129 op=LOAD Jan 24 00:35:05.455000 audit: BPF prog-id=130 op=LOAD Jan 24 00:35:05.455000 audit: BPF prog-id=76 op=UNLOAD Jan 24 00:35:05.455000 audit: BPF prog-id=77 op=UNLOAD Jan 24 00:35:05.566436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:35:05.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:35:05.574479 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:35:05.615687 kubelet[2887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:35:05.616019 kubelet[2887]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:35:05.616063 kubelet[2887]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:35:05.616267 kubelet[2887]: I0124 00:35:05.616243 2887 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:35:05.622731 kubelet[2887]: I0124 00:35:05.622702 2887 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:35:05.622833 kubelet[2887]: I0124 00:35:05.622827 2887 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:35:05.623065 kubelet[2887]: I0124 00:35:05.623056 2887 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:35:05.624122 kubelet[2887]: I0124 00:35:05.624087 2887 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:35:05.626106 kubelet[2887]: I0124 00:35:05.626091 2887 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:35:05.629683 kubelet[2887]: I0124 00:35:05.629668 2887 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:35:05.632460 kubelet[2887]: I0124 00:35:05.632443 2887 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:35:05.632744 kubelet[2887]: I0124 00:35:05.632686 2887 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:35:05.632917 kubelet[2887]: I0124 00:35:05.632714 2887 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4593-0-0-7-bbab233dcd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:35:05.633082 kubelet[2887]: I0124 00:35:05.632927 2887 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:35:05.633082 kubelet[2887]: I0124 00:35:05.632936 2887 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:35:05.633082 kubelet[2887]: I0124 00:35:05.632973 2887 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:35:05.633152 kubelet[2887]: I0124 00:35:05.633097 2887 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:35:05.633152 kubelet[2887]: I0124 00:35:05.633113 2887 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:35:05.633152 kubelet[2887]: I0124 00:35:05.633133 2887 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:35:05.633152 kubelet[2887]: I0124 00:35:05.633140 2887 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:35:05.634862 kubelet[2887]: I0124 00:35:05.634786 2887 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:35:05.635554 kubelet[2887]: I0124 00:35:05.635544 2887 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:35:05.636620 kubelet[2887]: I0124 00:35:05.636610 2887 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:35:05.636704 kubelet[2887]: I0124 00:35:05.636698 2887 server.go:1287] "Started kubelet" Jan 24 00:35:05.639083 kubelet[2887]: I0124 00:35:05.639073 2887 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:35:05.641238 kubelet[2887]: I0124 00:35:05.641059 2887 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:35:05.641944 kubelet[2887]: I0124 00:35:05.641919 2887 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:35:05.642808 kubelet[2887]: I0124 00:35:05.642769 2887 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:35:05.642938 kubelet[2887]: I0124 00:35:05.642923 2887 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:35:05.643647 kubelet[2887]: I0124 00:35:05.643629 2887 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:35:05.646471 kubelet[2887]: I0124 00:35:05.646298 2887 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:35:05.650257 kubelet[2887]: E0124 00:35:05.650237 2887 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4593-0-0-7-bbab233dcd\" not found" Jan 24 00:35:05.650568 kubelet[2887]: I0124 00:35:05.650558 2887 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:35:05.650646 kubelet[2887]: I0124 00:35:05.650639 2887 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:35:05.653400 kubelet[2887]: I0124 00:35:05.653383 2887 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:35:05.654558 kubelet[2887]: I0124 00:35:05.654343 2887 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:35:05.654558 kubelet[2887]: I0124 00:35:05.654365 2887 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:35:05.654558 kubelet[2887]: I0124 00:35:05.654379 2887 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:35:05.654558 kubelet[2887]: I0124 00:35:05.654387 2887 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:35:05.654558 kubelet[2887]: E0124 00:35:05.654422 2887 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:35:05.662084 kubelet[2887]: I0124 00:35:05.661105 2887 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:35:05.665221 kubelet[2887]: I0124 00:35:05.664477 2887 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:35:05.665221 kubelet[2887]: I0124 00:35:05.664488 2887 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:35:05.668855 kubelet[2887]: E0124 00:35:05.668829 2887 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.700856 2887 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.700869 2887 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.700890 2887 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.701013 2887 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.701021 2887 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.701035 2887 policy_none.go:49] "None policy: Start" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.701043 2887 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.701050 2887 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:35:05.701238 kubelet[2887]: I0124 00:35:05.701143 2887 state_mem.go:75] "Updated machine memory state" Jan 24 00:35:05.704626 kubelet[2887]: I0124 00:35:05.704608 2887 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:35:05.705238 kubelet[2887]: I0124 00:35:05.705229 2887 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:35:05.705325 kubelet[2887]: I0124 00:35:05.705304 2887 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:35:05.705731 kubelet[2887]: I0124 00:35:05.705519 2887 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:35:05.705782 kubelet[2887]: E0124 00:35:05.705773 2887 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:35:05.755720 kubelet[2887]: I0124 00:35:05.755693 2887 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.756219 kubelet[2887]: I0124 00:35:05.755968 2887 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.756291 kubelet[2887]: I0124 00:35:05.756046 2887 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.807764 kubelet[2887]: I0124 00:35:05.807743 2887 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.818647 kubelet[2887]: I0124 00:35:05.818622 2887 kubelet_node_status.go:124] "Node was previously registered" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.818786 kubelet[2887]: I0124 00:35:05.818688 2887 kubelet_node_status.go:78] "Successfully registered node" node="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952263 kubelet[2887]: I0124 00:35:05.952138 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952263 kubelet[2887]: I0124 00:35:05.952179 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-flexvolume-dir\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952263 kubelet[2887]: I0124 00:35:05.952200 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-k8s-certs\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952263 kubelet[2887]: I0124 00:35:05.952236 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c7a4b6b05bddd2c840c0e29ffdd3885-k8s-certs\") pod \"kube-apiserver-ci-4593-0-0-7-bbab233dcd\" (UID: \"2c7a4b6b05bddd2c840c0e29ffdd3885\") " pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952263 kubelet[2887]: I0124 00:35:05.952256 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c7a4b6b05bddd2c840c0e29ffdd3885-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4593-0-0-7-bbab233dcd\" (UID: \"2c7a4b6b05bddd2c840c0e29ffdd3885\") " pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952487 kubelet[2887]: I0124 00:35:05.952274 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-ca-certs\") pod 
\"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952487 kubelet[2887]: I0124 00:35:05.952296 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19593c54166ddd6e5976ac4933059e66-kubeconfig\") pod \"kube-controller-manager-ci-4593-0-0-7-bbab233dcd\" (UID: \"19593c54166ddd6e5976ac4933059e66\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952487 kubelet[2887]: I0124 00:35:05.952313 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6ab8a7eb68fc324e02221d5755a6265-kubeconfig\") pod \"kube-scheduler-ci-4593-0-0-7-bbab233dcd\" (UID: \"e6ab8a7eb68fc324e02221d5755a6265\") " pod="kube-system/kube-scheduler-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:05.952487 kubelet[2887]: I0124 00:35:05.952331 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c7a4b6b05bddd2c840c0e29ffdd3885-ca-certs\") pod \"kube-apiserver-ci-4593-0-0-7-bbab233dcd\" (UID: \"2c7a4b6b05bddd2c840c0e29ffdd3885\") " pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:06.644828 kubelet[2887]: I0124 00:35:06.644791 2887 apiserver.go:52] "Watching apiserver" Jan 24 00:35:06.650663 kubelet[2887]: I0124 00:35:06.650638 2887 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:35:06.706270 kubelet[2887]: I0124 00:35:06.706120 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4593-0-0-7-bbab233dcd" podStartSLOduration=1.706090181 podStartE2EDuration="1.706090181s" podCreationTimestamp="2026-01-24 00:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:06.706065144 +0000 UTC m=+1.126996472" watchObservedRunningTime="2026-01-24 00:35:06.706090181 +0000 UTC m=+1.127021487" Jan 24 00:35:06.716352 kubelet[2887]: I0124 00:35:06.716238 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4593-0-0-7-bbab233dcd" podStartSLOduration=1.7162238269999999 podStartE2EDuration="1.716223827s" podCreationTimestamp="2026-01-24 00:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:06.716059529 +0000 UTC m=+1.136990849" watchObservedRunningTime="2026-01-24 00:35:06.716223827 +0000 UTC m=+1.137155147" Jan 24 00:35:06.734189 kubelet[2887]: I0124 00:35:06.734149 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4593-0-0-7-bbab233dcd" podStartSLOduration=1.734134199 podStartE2EDuration="1.734134199s" podCreationTimestamp="2026-01-24 00:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:06.725035778 +0000 UTC m=+1.145967100" watchObservedRunningTime="2026-01-24 00:35:06.734134199 +0000 UTC m=+1.155065522" Jan 24 00:35:09.309510 update_engine[1657]: I20260124 00:35:09.309418 1657 update_attempter.cc:509] Updating boot flags... 
Jan 24 00:35:11.712118 kubelet[2887]: I0124 00:35:11.711978 2887 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:35:11.713537 kubelet[2887]: I0124 00:35:11.713399 2887 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:35:11.713582 containerd[1677]: time="2026-01-24T00:35:11.712969956Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:35:12.494001 systemd[1]: Created slice kubepods-besteffort-podd8d7540f_2cd8_48b6_936e_a4e7d549d03a.slice - libcontainer container kubepods-besteffort-podd8d7540f_2cd8_48b6_936e_a4e7d549d03a.slice. Jan 24 00:35:12.591181 kubelet[2887]: I0124 00:35:12.591131 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8d7540f-2cd8-48b6-936e-a4e7d549d03a-kube-proxy\") pod \"kube-proxy-8pm2g\" (UID: \"d8d7540f-2cd8-48b6-936e-a4e7d549d03a\") " pod="kube-system/kube-proxy-8pm2g" Jan 24 00:35:12.591181 kubelet[2887]: I0124 00:35:12.591176 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8d7540f-2cd8-48b6-936e-a4e7d549d03a-lib-modules\") pod \"kube-proxy-8pm2g\" (UID: \"d8d7540f-2cd8-48b6-936e-a4e7d549d03a\") " pod="kube-system/kube-proxy-8pm2g" Jan 24 00:35:12.591380 kubelet[2887]: I0124 00:35:12.591196 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjwt\" (UniqueName: \"kubernetes.io/projected/d8d7540f-2cd8-48b6-936e-a4e7d549d03a-kube-api-access-jvjwt\") pod \"kube-proxy-8pm2g\" (UID: \"d8d7540f-2cd8-48b6-936e-a4e7d549d03a\") " pod="kube-system/kube-proxy-8pm2g" Jan 24 00:35:12.591380 kubelet[2887]: I0124 00:35:12.591234 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8d7540f-2cd8-48b6-936e-a4e7d549d03a-xtables-lock\") pod \"kube-proxy-8pm2g\" (UID: \"d8d7540f-2cd8-48b6-936e-a4e7d549d03a\") " pod="kube-system/kube-proxy-8pm2g" Jan 24 00:35:12.804939 containerd[1677]: time="2026-01-24T00:35:12.804712769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pm2g,Uid:d8d7540f-2cd8-48b6-936e-a4e7d549d03a,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:12.828634 containerd[1677]: time="2026-01-24T00:35:12.828597499Z" level=info msg="connecting to shim cbbcb7a50e9fe6dbf08af17a047e33a93fa9e9933da1f82d85f6bdd020974fb1" address="unix:///run/containerd/s/5ec2aa665eaf829c5312cfb381af8a01757223929e205c028f508e4caede0a14" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:12.863627 systemd[1]: Started cri-containerd-cbbcb7a50e9fe6dbf08af17a047e33a93fa9e9933da1f82d85f6bdd020974fb1.scope - libcontainer container cbbcb7a50e9fe6dbf08af17a047e33a93fa9e9933da1f82d85f6bdd020974fb1. Jan 24 00:35:12.877059 systemd[1]: Created slice kubepods-besteffort-pod4db2676b_40da_429b_8198_8875a9e56f27.slice - libcontainer container kubepods-besteffort-pod4db2676b_40da_429b_8198_8875a9e56f27.slice. 
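The first records in the block above show the kubelet pushing the node's pod CIDR down to the container runtime over CRI (originalPodCIDR="" newPodCIDR="192.168.0.0/24"). For reference, a minimal Python sketch of what that per-node range covers, using the standard ipaddress module:

    import ipaddress

    # Pod CIDR taken from the "Updating Pod CIDR" record above.
    pod_cidr = ipaddress.ip_network("192.168.0.0/24")

    print(pod_cidr.num_addresses)     # 256 addresses in the node's pod range
    print(pod_cidr[1], pod_cidr[-2])  # assignable pod IPs run from 192.168.0.1 to 192.168.0.254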
Jan 24 00:35:12.884175 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 24 00:35:12.884276 kernel: audit: type=1334 audit(1769214912.880:433): prog-id=131 op=LOAD Jan 24 00:35:12.880000 audit: BPF prog-id=131 op=LOAD Jan 24 00:35:12.883000 audit: BPF prog-id=132 op=LOAD Jan 24 00:35:12.883000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.887760 kernel: audit: type=1334 audit(1769214912.883:434): prog-id=132 op=LOAD Jan 24 00:35:12.887801 kernel: audit: type=1300 audit(1769214912.883:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.894031 kubelet[2887]: I0124 00:35:12.893963 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdzf\" (UniqueName: \"kubernetes.io/projected/4db2676b-40da-429b-8198-8875a9e56f27-kube-api-access-bmdzf\") pod \"tigera-operator-7dcd859c48-g7hzl\" (UID: \"4db2676b-40da-429b-8198-8875a9e56f27\") " pod="tigera-operator/tigera-operator-7dcd859c48-g7hzl" Jan 24 00:35:12.894031 kubelet[2887]: I0124 00:35:12.893994 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4db2676b-40da-429b-8198-8875a9e56f27-var-lib-calico\") pod \"tigera-operator-7dcd859c48-g7hzl\" (UID: \"4db2676b-40da-429b-8198-8875a9e56f27\") " pod="tigera-operator/tigera-operator-7dcd859c48-g7hzl" Jan 24 00:35:12.895276 kernel: audit: type=1327 audit(1769214912.883:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.895320 kernel: audit: type=1334 audit(1769214912.883:435): prog-id=132 op=UNLOAD Jan 24 00:35:12.883000 audit: BPF prog-id=132 op=UNLOAD Jan 24 00:35:12.883000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.901332 kernel: audit: type=1300 audit(1769214912.883:435): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.901392 kernel: audit: type=1327 audit(1769214912.883:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.883000 audit: BPF prog-id=133 op=LOAD Jan 24 00:35:12.904751 kernel: audit: type=1334 audit(1769214912.883:436): prog-id=133 op=LOAD Jan 24 00:35:12.904798 kernel: audit: type=1300 audit(1769214912.883:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.909677 kernel: audit: type=1327 audit(1769214912.883:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.883000 audit: BPF prog-id=134 op=LOAD Jan 24 00:35:12.883000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.883000 audit: BPF prog-id=134 op=UNLOAD Jan 24 00:35:12.883000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.883000 audit: BPF prog-id=133 op=UNLOAD Jan 24 00:35:12.883000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.883000 audit: BPF prog-id=135 op=LOAD Jan 24 00:35:12.883000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:12.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362626362376135306539666536646266303861663137613034376533 Jan 24 00:35:12.916829 containerd[1677]: time="2026-01-24T00:35:12.916784783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pm2g,Uid:d8d7540f-2cd8-48b6-936e-a4e7d549d03a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbbcb7a50e9fe6dbf08af17a047e33a93fa9e9933da1f82d85f6bdd020974fb1\"" Jan 24 00:35:12.919222 containerd[1677]: time="2026-01-24T00:35:12.919192085Z" level=info msg="CreateContainer within sandbox \"cbbcb7a50e9fe6dbf08af17a047e33a93fa9e9933da1f82d85f6bdd020974fb1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:35:12.931677 containerd[1677]: time="2026-01-24T00:35:12.931656914Z" level=info msg="Container ec292d22630e1f16a32abe574d66ee53484a1ad9c42ca94b772b8d2a1c292357: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:12.934376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840073878.mount: Deactivated successfully. Jan 24 00:35:12.941693 containerd[1677]: time="2026-01-24T00:35:12.941654206Z" level=info msg="CreateContainer within sandbox \"cbbcb7a50e9fe6dbf08af17a047e33a93fa9e9933da1f82d85f6bdd020974fb1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec292d22630e1f16a32abe574d66ee53484a1ad9c42ca94b772b8d2a1c292357\"" Jan 24 00:35:12.943023 containerd[1677]: time="2026-01-24T00:35:12.942129944Z" level=info msg="StartContainer for \"ec292d22630e1f16a32abe574d66ee53484a1ad9c42ca94b772b8d2a1c292357\"" Jan 24 00:35:12.944103 containerd[1677]: time="2026-01-24T00:35:12.944080471Z" level=info msg="connecting to shim ec292d22630e1f16a32abe574d66ee53484a1ad9c42ca94b772b8d2a1c292357" address="unix:///run/containerd/s/5ec2aa665eaf829c5312cfb381af8a01757223929e205c028f508e4caede0a14" protocol=ttrpc version=3 Jan 24 00:35:12.959392 systemd[1]: Started cri-containerd-ec292d22630e1f16a32abe574d66ee53484a1ad9c42ca94b772b8d2a1c292357.scope - libcontainer container ec292d22630e1f16a32abe574d66ee53484a1ad9c42ca94b772b8d2a1c292357. 
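The kernel audit burst above records runc loading and immediately unloading BPF programs while the kube-proxy container is set up (on cgroup v2 this is typically the device-controller filter); the SYSCALL records identify the calls only by number. A small Python sketch, limited to the x86_64 syscall numbers that appear in this section of the log (321 and 3 above; 46 shows up with the iptables NETFILTER_CFG records below):

    # x86_64 syscall numbers seen in the audit SYSCALL records in this section.
    X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

    # Pull the syscall= field out of one record payload (other fields omitted here for brevity).
    record = "arch=c000003e syscall=321 success=yes exit=21"
    fields = dict(kv.split("=", 1) for kv in record.split())
    print(X86_64_SYSCALLS[int(fields["syscall"])])  # bpf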
Jan 24 00:35:13.001000 audit: BPF prog-id=136 op=LOAD Jan 24 00:35:13.001000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2953 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323932643232363330653166313661333261626535373464363665 Jan 24 00:35:13.001000 audit: BPF prog-id=137 op=LOAD Jan 24 00:35:13.001000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2953 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323932643232363330653166313661333261626535373464363665 Jan 24 00:35:13.001000 audit: BPF prog-id=137 op=UNLOAD Jan 24 00:35:13.001000 audit[2992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323932643232363330653166313661333261626535373464363665 Jan 24 00:35:13.001000 audit: BPF prog-id=136 op=UNLOAD Jan 24 00:35:13.001000 audit[2992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323932643232363330653166313661333261626535373464363665 Jan 24 00:35:13.001000 audit: BPF prog-id=138 op=LOAD Jan 24 00:35:13.001000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2953 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323932643232363330653166313661333261626535373464363665 Jan 24 00:35:13.021072 containerd[1677]: time="2026-01-24T00:35:13.021039539Z" level=info msg="StartContainer for 
\"ec292d22630e1f16a32abe574d66ee53484a1ad9c42ca94b772b8d2a1c292357\" returns successfully" Jan 24 00:35:13.128000 audit[3055]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.128000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbdb61d30 a2=0 a3=7ffcbdb61d1c items=0 ppid=3004 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:35:13.130000 audit[3057]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.130000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1ab15840 a2=0 a3=7ffe1ab1582c items=0 ppid=3004 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:35:13.131000 audit[3054]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.131000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff98f07b70 a2=0 a3=7fff98f07b5c items=0 ppid=3004 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:35:13.131000 audit[3058]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.131000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff496bfbe0 a2=0 a3=7fff496bfbcc items=0 ppid=3004 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:35:13.134000 audit[3059]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.134000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff356d1c00 a2=0 a3=7fff356d1bec items=0 ppid=3004 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:35:13.135000 audit[3060]: NETFILTER_CFG table=filter:59 family=10 
entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.135000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3eb0a2b0 a2=0 a3=7ffe3eb0a29c items=0 ppid=3004 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:35:13.180310 containerd[1677]: time="2026-01-24T00:35:13.180271311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g7hzl,Uid:4db2676b-40da-429b-8198-8875a9e56f27,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:35:13.204225 containerd[1677]: time="2026-01-24T00:35:13.204151488Z" level=info msg="connecting to shim f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8" address="unix:///run/containerd/s/47d84dd0a6390badaa3a9edfb56876fa8b14b74c62046063d10dbce964cca601" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:13.227000 audit[3094]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.227000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc10ca4b00 a2=0 a3=7ffc10ca4aec items=0 ppid=3004 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:35:13.229482 systemd[1]: Started cri-containerd-f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8.scope - libcontainer container f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8. 
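The NETFILTER_CFG records in this stretch show the kube-proxy iptables chains (KUBE-PROXY-CANARY, KUBE-SERVICES, and friends) being registered, but the invoking command is carried only as a hex-encoded PROCTITLE with NUL-separated argv. A minimal decoding sketch in Python, with the hex value copied verbatim from the KUBE-PROXY-CANARY mangle-table record above; it prints the reconstructed iptables invocation:

    # PROCTITLE value copied from the NETFILTER_CFG mangle-table record above.
    hex_proctitle = (
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    )

    # Audit hex-encodes the argv and separates arguments with NUL bytes; undo both.
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle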
Jan 24 00:35:13.231000 audit[3096]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.231000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffca9b6bf10 a2=0 a3=7ffca9b6befc items=0 ppid=3004 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 24 00:35:13.235000 audit[3101]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.235000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcbc1517c0 a2=0 a3=7ffcbc1517ac items=0 ppid=3004 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 24 00:35:13.237000 audit[3107]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.237000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff26345300 a2=0 a3=7fff263452ec items=0 ppid=3004 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:35:13.240000 audit[3109]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.240000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8b83ade0 a2=0 a3=7ffe8b83adcc items=0 ppid=3004 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:35:13.242000 audit[3110]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.242000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87110fb0 a2=0 a3=7ffd87110f9c items=0 ppid=3004 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:35:13.243000 audit: BPF prog-id=139 op=LOAD Jan 24 00:35:13.244000 audit: BPF prog-id=140 op=LOAD Jan 24 00:35:13.244000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3070 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613239623630323862393731616637306537356365376662396237 Jan 24 00:35:13.244000 audit: BPF prog-id=140 op=UNLOAD Jan 24 00:35:13.244000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613239623630323862393731616637306537356365376662396237 Jan 24 00:35:13.245000 audit: BPF prog-id=141 op=LOAD Jan 24 00:35:13.245000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3070 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613239623630323862393731616637306537356365376662396237 Jan 24 00:35:13.245000 audit: BPF prog-id=142 op=LOAD Jan 24 00:35:13.245000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3070 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613239623630323862393731616637306537356365376662396237 Jan 24 00:35:13.245000 audit: BPF prog-id=142 op=UNLOAD Jan 24 00:35:13.245000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.245000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613239623630323862393731616637306537356365376662396237 Jan 24 00:35:13.245000 audit: BPF prog-id=141 op=UNLOAD Jan 24 00:35:13.245000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613239623630323862393731616637306537356365376662396237 Jan 24 00:35:13.246000 audit[3112]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.246000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdfb576480 a2=0 a3=7ffdfb57646c items=0 ppid=3004 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.246000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:35:13.246000 audit: BPF prog-id=143 op=LOAD Jan 24 00:35:13.246000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3070 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613239623630323862393731616637306537356365376662396237 Jan 24 00:35:13.250000 audit[3115]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.250000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd905e2440 a2=0 a3=7ffd905e242c items=0 ppid=3004 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 24 00:35:13.252000 audit[3116]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.252000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5cc547f0 
a2=0 a3=7fff5cc547dc items=0 ppid=3004 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:35:13.254000 audit[3118]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.254000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe0cb79380 a2=0 a3=7ffe0cb7936c items=0 ppid=3004 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:35:13.256000 audit[3119]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.256000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee2208bf0 a2=0 a3=7ffee2208bdc items=0 ppid=3004 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:35:13.259000 audit[3121]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.259000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0c1f1f40 a2=0 a3=7ffe0c1f1f2c items=0 ppid=3004 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:35:13.263000 audit[3124]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.263000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8ff52b70 a2=0 a3=7ffc8ff52b5c items=0 ppid=3004 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:35:13.266000 audit[3127]: 
NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.266000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff6873060 a2=0 a3=7ffff687304c items=0 ppid=3004 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:35:13.267000 audit[3128]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.267000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd7297930 a2=0 a3=7fffd729791c items=0 ppid=3004 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:35:13.270000 audit[3130]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.270000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe7fab6110 a2=0 a3=7ffe7fab60fc items=0 ppid=3004 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:35:13.277000 audit[3134]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.277000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4119d4c0 a2=0 a3=7ffd4119d4ac items=0 ppid=3004 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:35:13.278000 audit[3135]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.278000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0e1abad0 a2=0 a3=7ffd0e1ababc items=0 ppid=3004 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.278000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:35:13.281000 audit[3142]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:35:13.281000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff40218bb0 a2=0 a3=7fff40218b9c items=0 ppid=3004 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:35:13.289987 containerd[1677]: time="2026-01-24T00:35:13.289950750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g7hzl,Uid:4db2676b-40da-429b-8198-8875a9e56f27,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8\"" Jan 24 00:35:13.292982 containerd[1677]: time="2026-01-24T00:35:13.292885006Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:35:13.310000 audit[3148]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:13.310000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe2e6b39c0 a2=0 a3=7ffe2e6b39ac items=0 ppid=3004 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:13.316000 audit[3148]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:13.316000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe2e6b39c0 a2=0 a3=7ffe2e6b39ac items=0 ppid=3004 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:13.318000 audit[3153]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.318000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffffd7b5970 a2=0 a3=7ffffd7b595c items=0 ppid=3004 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:35:13.321000 audit[3155]: NETFILTER_CFG table=filter:82 family=10 
entries=2 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.321000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc9666dbb0 a2=0 a3=7ffc9666db9c items=0 ppid=3004 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.321000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 24 00:35:13.325000 audit[3158]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.325000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffa3d35ac0 a2=0 a3=7fffa3d35aac items=0 ppid=3004 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 24 00:35:13.326000 audit[3159]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.326000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd131efdf0 a2=0 a3=7ffd131efddc items=0 ppid=3004 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:35:13.328000 audit[3161]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.328000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcedc4cb10 a2=0 a3=7ffcedc4cafc items=0 ppid=3004 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:35:13.330000 audit[3162]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.330000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff83900780 a2=0 a3=7fff8390076c items=0 ppid=3004 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:35:13.332000 audit[3164]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.332000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4c355b00 a2=0 a3=7ffd4c355aec items=0 ppid=3004 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 24 00:35:13.336000 audit[3167]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.336000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd84cb1370 a2=0 a3=7ffd84cb135c items=0 ppid=3004 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:35:13.337000 audit[3168]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.337000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0a8e21d0 a2=0 a3=7ffd0a8e21bc items=0 ppid=3004 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:35:13.339000 audit[3170]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.339000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd67a8d0d0 a2=0 a3=7ffd67a8d0bc items=0 ppid=3004 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:35:13.341000 audit[3171]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.341000 audit[3171]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7fff18310be0 a2=0 a3=7fff18310bcc items=0 ppid=3004 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:35:13.343000 audit[3173]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.343000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9c34b6b0 a2=0 a3=7fff9c34b69c items=0 ppid=3004 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:35:13.346000 audit[3176]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.346000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff8a267e0 a2=0 a3=7ffff8a267cc items=0 ppid=3004 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:35:13.351000 audit[3179]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.351000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0d23a940 a2=0 a3=7ffe0d23a92c items=0 ppid=3004 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 24 00:35:13.352000 audit[3180]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.352000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee4446770 a2=0 a3=7ffee444675c items=0 ppid=3004 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.352000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:35:13.355000 audit[3182]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.355000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd3329cfd0 a2=0 a3=7ffd3329cfbc items=0 ppid=3004 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:35:13.358000 audit[3185]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.358000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe963592a0 a2=0 a3=7ffe9635928c items=0 ppid=3004 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:35:13.359000 audit[3186]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.359000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda538ce30 a2=0 a3=7ffda538ce1c items=0 ppid=3004 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:35:13.362000 audit[3188]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.362000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff73419590 a2=0 a3=7fff7341957c items=0 ppid=3004 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:35:13.363000 audit[3189]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.363000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9c29bb60 a2=0 a3=7ffe9c29bb4c items=0 ppid=3004 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:35:13.366000 audit[3191]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.366000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff30656430 a2=0 a3=7fff3065641c items=0 ppid=3004 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:35:13.369000 audit[3194]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:35:13.369000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe04991710 a2=0 a3=7ffe049916fc items=0 ppid=3004 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.369000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:35:13.372000 audit[3196]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:35:13.372000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe82294140 a2=0 a3=7ffe8229412c items=0 ppid=3004 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.372000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:13.372000 audit[3196]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:35:13.372000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe82294140 a2=0 a3=7ffe8229412c items=0 ppid=3004 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:13.372000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:13.725242 kubelet[2887]: I0124 00:35:13.724394 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pm2g" podStartSLOduration=1.7242801939999999 podStartE2EDuration="1.724280194s" podCreationTimestamp="2026-01-24 00:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:13.72422916 +0000 UTC m=+8.145160466" 
watchObservedRunningTime="2026-01-24 00:35:13.724280194 +0000 UTC m=+8.145211522" Jan 24 00:35:15.063947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416961421.mount: Deactivated successfully. Jan 24 00:35:15.614606 containerd[1677]: time="2026-01-24T00:35:15.614540619Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:15.618259 containerd[1677]: time="2026-01-24T00:35:15.618227907Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 24 00:35:15.619399 containerd[1677]: time="2026-01-24T00:35:15.619374987Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:15.623269 containerd[1677]: time="2026-01-24T00:35:15.623245678Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:15.624494 containerd[1677]: time="2026-01-24T00:35:15.623778373Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.330841348s" Jan 24 00:35:15.624494 containerd[1677]: time="2026-01-24T00:35:15.624306126Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:35:15.635084 containerd[1677]: time="2026-01-24T00:35:15.635004551Z" level=info msg="CreateContainer within sandbox \"f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:35:15.648569 containerd[1677]: time="2026-01-24T00:35:15.647312842Z" level=info msg="Container 221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:15.659951 containerd[1677]: time="2026-01-24T00:35:15.659910030Z" level=info msg="CreateContainer within sandbox \"f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701\"" Jan 24 00:35:15.660522 containerd[1677]: time="2026-01-24T00:35:15.660493273Z" level=info msg="StartContainer for \"221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701\"" Jan 24 00:35:15.664223 containerd[1677]: time="2026-01-24T00:35:15.663323264Z" level=info msg="connecting to shim 221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701" address="unix:///run/containerd/s/47d84dd0a6390badaa3a9edfb56876fa8b14b74c62046063d10dbce964cca601" protocol=ttrpc version=3 Jan 24 00:35:15.700466 systemd[1]: Started cri-containerd-221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701.scope - libcontainer container 221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701. 
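The `proctitle=` fields in the audit records above are the invoking command line, hex-encoded with NUL bytes separating the arguments (auditd falls back to hex whenever the value contains non-printable characters). A minimal decoding sketch, assuming plain Python 3 and an illustrative helper name; the sample value is taken from the KUBE-NODEPORTS chain-creation record above:

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00")]

sample = ("6970367461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D4E4F4445504F525453002D740066696C746572")
print(decode_proctitle(sample))
# -> ['ip6tables', '-w', '5', '-W', '100000', '-N', 'KUBE-NODEPORTS', '-t', 'filter']
```

Records whose proctitle hit the audit length limit decode to a correspondingly truncated command line, which is why several of the longer `-I INPUT ... -m comment --comment ...` entries above end mid-word.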
Jan 24 00:35:15.714000 audit: BPF prog-id=144 op=LOAD Jan 24 00:35:15.715000 audit: BPF prog-id=145 op=LOAD Jan 24 00:35:15.715000 audit[3205]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3070 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:15.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232316635613433643435333138323462653465663938356237303563 Jan 24 00:35:15.715000 audit: BPF prog-id=145 op=UNLOAD Jan 24 00:35:15.715000 audit[3205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:15.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232316635613433643435333138323462653465663938356237303563 Jan 24 00:35:15.716000 audit: BPF prog-id=146 op=LOAD Jan 24 00:35:15.716000 audit[3205]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3070 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:15.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232316635613433643435333138323462653465663938356237303563 Jan 24 00:35:15.716000 audit: BPF prog-id=147 op=LOAD Jan 24 00:35:15.716000 audit[3205]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3070 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:15.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232316635613433643435333138323462653465663938356237303563 Jan 24 00:35:15.716000 audit: BPF prog-id=147 op=UNLOAD Jan 24 00:35:15.716000 audit[3205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:15.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232316635613433643435333138323462653465663938356237303563 Jan 24 00:35:15.716000 audit: BPF prog-id=146 op=UNLOAD Jan 24 00:35:15.716000 audit[3205]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:15.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232316635613433643435333138323462653465663938356237303563 Jan 24 00:35:15.717000 audit: BPF prog-id=148 op=LOAD Jan 24 00:35:15.717000 audit[3205]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3070 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:15.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232316635613433643435333138323462653465663938356237303563 Jan 24 00:35:15.768508 containerd[1677]: time="2026-01-24T00:35:15.768324736Z" level=info msg="StartContainer for \"221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701\" returns successfully" Jan 24 00:35:16.741176 kubelet[2887]: I0124 00:35:16.741108 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-g7hzl" podStartSLOduration=2.400670517 podStartE2EDuration="4.741087946s" podCreationTimestamp="2026-01-24 00:35:12 +0000 UTC" firstStartedPulling="2026-01-24 00:35:13.292360319 +0000 UTC m=+7.713291625" lastFinishedPulling="2026-01-24 00:35:15.632777749 +0000 UTC m=+10.053709054" observedRunningTime="2026-01-24 00:35:16.740804296 +0000 UTC m=+11.161735655" watchObservedRunningTime="2026-01-24 00:35:16.741087946 +0000 UTC m=+11.162019570" Jan 24 00:35:21.205595 sudo[1950]: pam_unix(sudo:session): session closed for user root Jan 24 00:35:21.204000 audit[1950]: USER_END pid=1950 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:35:21.206686 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 24 00:35:21.206717 kernel: audit: type=1106 audit(1769214921.204:513): pid=1950 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:35:21.209000 audit[1950]: CRED_DISP pid=1950 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:35:21.214232 kernel: audit: type=1104 audit(1769214921.209:514): pid=1950 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:35:21.305280 sshd[1949]: Connection closed by 4.153.228.146 port 59436 Jan 24 00:35:21.307375 sshd-session[1945]: pam_unix(sshd:session): session closed for user core Jan 24 00:35:21.309000 audit[1945]: USER_END pid=1945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:35:21.316226 kernel: audit: type=1106 audit(1769214921.309:515): pid=1945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:35:21.317276 systemd[1]: sshd@8-10.0.1.115:22-4.153.228.146:59436.service: Deactivated successfully. Jan 24 00:35:21.319081 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:35:21.309000 audit[1945]: CRED_DISP pid=1945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:35:21.323347 systemd[1]: session-10.scope: Consumed 4.474s CPU time, 228.1M memory peak. Jan 24 00:35:21.325224 kernel: audit: type=1104 audit(1769214921.309:516): pid=1945 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:35:21.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.1.115:22-4.153.228.146:59436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:35:21.328502 systemd-logind[1654]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:35:21.329245 kernel: audit: type=1131 audit(1769214921.316:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.1.115:22-4.153.228.146:59436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:35:21.329400 systemd-logind[1654]: Removed session 10. 
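The kernel-echoed audit lines above carry a header of the form `audit(1769214921.204:513)`: Unix epoch seconds with millisecond precision, followed by the serial number that groups related records of one event. A small conversion sketch, assuming plain Python 3 and illustrative names:

```python
from datetime import datetime, timezone

def parse_audit_header(header: str) -> tuple[datetime, int]:
    """Split 'audit(<epoch>.<ms>:<serial>)' into a UTC timestamp and serial."""
    epoch_str, serial = header[len("audit("):-1].split(":")
    return datetime.fromtimestamp(float(epoch_str), tz=timezone.utc), int(serial)

ts, serial = parse_audit_header("audit(1769214921.204:513)")
print(ts.isoformat(), serial)
# prints the UTC wall-clock time (2026-01-24 00:35:21.204) and serial 513,
# matching the USER_END record echoed by kauditd above.
```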
Jan 24 00:35:21.960000 audit[3286]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:21.966222 kernel: audit: type=1325 audit(1769214921.960:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:21.960000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe61313330 a2=0 a3=7ffe6131331c items=0 ppid=3004 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:21.972230 kernel: audit: type=1300 audit(1769214921.960:518): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe61313330 a2=0 a3=7ffe6131331c items=0 ppid=3004 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:21.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:21.978272 kernel: audit: type=1327 audit(1769214921.960:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:21.966000 audit[3286]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:21.982228 kernel: audit: type=1325 audit(1769214921.966:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:21.966000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe61313330 a2=0 a3=0 items=0 ppid=3004 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:21.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:21.989233 kernel: audit: type=1300 audit(1769214921.966:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe61313330 a2=0 a3=0 items=0 ppid=3004 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:21.998000 audit[3288]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:21.998000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff05d92dd0 a2=0 a3=7fff05d92dbc items=0 ppid=3004 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:21.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:22.002000 audit[3288]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3288 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:22.002000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff05d92dd0 a2=0 a3=0 items=0 ppid=3004 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:22.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:24.012000 audit[3290]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:24.012000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe27084c80 a2=0 a3=7ffe27084c6c items=0 ppid=3004 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:24.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:24.016000 audit[3290]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:24.016000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe27084c80 a2=0 a3=0 items=0 ppid=3004 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:24.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:24.028000 audit[3292]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:24.028000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffefc5562e0 a2=0 a3=7ffefc5562cc items=0 ppid=3004 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:24.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:24.033000 audit[3292]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:24.033000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffefc5562e0 a2=0 a3=0 items=0 ppid=3004 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:24.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:25.049000 audit[3294]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:25.049000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffe3ea58770 a2=0 a3=7ffe3ea5875c items=0 ppid=3004 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:25.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:25.054000 audit[3294]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:25.054000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3ea58770 a2=0 a3=0 items=0 ppid=3004 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:25.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:25.706565 systemd[1]: Created slice kubepods-besteffort-podb874e951_47cf_4441_a43d_dcb906cc0877.slice - libcontainer container kubepods-besteffort-podb874e951_47cf_4441_a43d_dcb906cc0877.slice. Jan 24 00:35:25.781626 kubelet[2887]: I0124 00:35:25.781450 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b874e951-47cf-4441-a43d-dcb906cc0877-tigera-ca-bundle\") pod \"calico-typha-54559b7474-s26mc\" (UID: \"b874e951-47cf-4441-a43d-dcb906cc0877\") " pod="calico-system/calico-typha-54559b7474-s26mc" Jan 24 00:35:25.781626 kubelet[2887]: I0124 00:35:25.781489 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b874e951-47cf-4441-a43d-dcb906cc0877-typha-certs\") pod \"calico-typha-54559b7474-s26mc\" (UID: \"b874e951-47cf-4441-a43d-dcb906cc0877\") " pod="calico-system/calico-typha-54559b7474-s26mc" Jan 24 00:35:25.781626 kubelet[2887]: I0124 00:35:25.781510 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlr95\" (UniqueName: \"kubernetes.io/projected/b874e951-47cf-4441-a43d-dcb906cc0877-kube-api-access-xlr95\") pod \"calico-typha-54559b7474-s26mc\" (UID: \"b874e951-47cf-4441-a43d-dcb906cc0877\") " pod="calico-system/calico-typha-54559b7474-s26mc" Jan 24 00:35:25.873602 systemd[1]: Created slice kubepods-besteffort-podf99f2681_2d99_41c4_bc78_961762b74b6c.slice - libcontainer container kubepods-besteffort-podf99f2681_2d99_41c4_bc78_961762b74b6c.slice. 
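The iptables-restore records above show the IPv4 (`family=2`) filter table growing one rule at a time -- `entries=15`, 16, 17, 18, 19 -- as kube-proxy re-syncs, while each restore leaves the nat table at 12 entries. A rough tallying sketch over the raw journal text, assuming plain Python 3; the regex follows the `table=<name>:<seq> family=<n> entries=<n>` layout of the NETFILTER_CFG records above:

```python
import re
from collections import Counter

NFT_CFG = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

def tally_entries(log_text: str) -> Counter:
    """Sum the entries= counts per (table, family) across NETFILTER_CFG records."""
    counts: Counter = Counter()
    for table, family, entries in NFT_CFG.findall(log_text):
        counts[(table, family)] += int(entries)
    return counts

# Usage: tally_entries(open("node-journal.txt").read())
# yields e.g. Counter({('filter', '2'): ..., ('nat', '2'): ..., ('filter', '10'): ...})
```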
Jan 24 00:35:25.982727 kubelet[2887]: I0124 00:35:25.982605 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-var-run-calico\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982727 kubelet[2887]: I0124 00:35:25.982661 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzqs\" (UniqueName: \"kubernetes.io/projected/f99f2681-2d99-41c4-bc78-961762b74b6c-kube-api-access-plzqs\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982727 kubelet[2887]: I0124 00:35:25.982691 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-cni-net-dir\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982727 kubelet[2887]: I0124 00:35:25.982723 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f99f2681-2d99-41c4-bc78-961762b74b6c-tigera-ca-bundle\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982902 kubelet[2887]: I0124 00:35:25.982748 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-flexvol-driver-host\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982902 kubelet[2887]: I0124 00:35:25.982769 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-lib-modules\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982902 kubelet[2887]: I0124 00:35:25.982793 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-xtables-lock\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982902 kubelet[2887]: I0124 00:35:25.982816 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f99f2681-2d99-41c4-bc78-961762b74b6c-node-certs\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.982902 kubelet[2887]: I0124 00:35:25.982838 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-var-lib-calico\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.983000 kubelet[2887]: I0124 00:35:25.982863 2887 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-cni-log-dir\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.983000 kubelet[2887]: I0124 00:35:25.982883 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-cni-bin-dir\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:25.983000 kubelet[2887]: I0124 00:35:25.982902 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f99f2681-2d99-41c4-bc78-961762b74b6c-policysync\") pod \"calico-node-9gf56\" (UID: \"f99f2681-2d99-41c4-bc78-961762b74b6c\") " pod="calico-system/calico-node-9gf56" Jan 24 00:35:26.019470 containerd[1677]: time="2026-01-24T00:35:26.019379147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54559b7474-s26mc,Uid:b874e951-47cf-4441-a43d-dcb906cc0877,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:26.041645 containerd[1677]: time="2026-01-24T00:35:26.041335808Z" level=info msg="connecting to shim e100483d3a2844e14cf2e41b5cca1078f1f92289b2fd23e4ea669c53639555f7" address="unix:///run/containerd/s/bb5f83fcfbc90630eed237a5efdd5894edbdab2a0b046577477c35870f64f496" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:26.068000 audit[3331]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:26.068000 audit[3331]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff189b28a0 a2=0 a3=7fff189b288c items=0 ppid=3004 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:26.078387 systemd[1]: Started cri-containerd-e100483d3a2844e14cf2e41b5cca1078f1f92289b2fd23e4ea669c53639555f7.scope - libcontainer container e100483d3a2844e14cf2e41b5cca1078f1f92289b2fd23e4ea669c53639555f7. 
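The kubelet `reconciler_common.go` lines above enumerate every volume verified as attached for the calico-typha and calico-node pods, with the quoted `UniqueName` encoding the volume plugin (`kubernetes.io/host-path`, `configmap`, `secret`, `projected`). A small extraction sketch over the journal text as captured here; the regex assumes the backslash-escaped quoting shown above, and the helper name is illustrative:

```python
import re

# Matches: volume \"<name>\" (UniqueName: \"kubernetes.io/<plugin>/...
VOLUME = re.compile(r'volume \\"([^"\\]+)\\" \(UniqueName: \\"kubernetes\.io/([^/\\]+)/')

def volumes_by_plugin(log_text: str) -> dict[str, list[str]]:
    """Group verified volume names by their kubernetes.io volume plugin."""
    groups: dict[str, list[str]] = {}
    for name, plugin in VOLUME.findall(log_text):
        groups.setdefault(plugin, []).append(name)
    return groups

# Against the records above this yields roughly
# {'configmap': ['tigera-ca-bundle', ...], 'secret': ['typha-certs', 'node-certs'],
#  'projected': ['kube-api-access-xlr95', ...], 'host-path': ['var-run-calico', ...]}
```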
Jan 24 00:35:26.082169 kubelet[2887]: E0124 00:35:26.082134 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:26.082000 audit[3331]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:26.082000 audit[3331]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff189b28a0 a2=0 a3=0 items=0 ppid=3004 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:26.085063 kubelet[2887]: E0124 00:35:26.085041 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.085063 kubelet[2887]: W0124 00:35:26.085056 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.086421 kubelet[2887]: E0124 00:35:26.085080 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.091541 kubelet[2887]: E0124 00:35:26.091526 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.091654 kubelet[2887]: W0124 00:35:26.091614 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.091654 kubelet[2887]: E0124 00:35:26.091631 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.102152 kubelet[2887]: E0124 00:35:26.102132 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.102152 kubelet[2887]: W0124 00:35:26.102148 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.102288 kubelet[2887]: E0124 00:35:26.102163 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.110000 audit: BPF prog-id=149 op=LOAD Jan 24 00:35:26.110000 audit: BPF prog-id=150 op=LOAD Jan 24 00:35:26.110000 audit[3317]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3307 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303034383364336132383434653134636632653431623563636131 Jan 24 00:35:26.110000 audit: BPF prog-id=150 op=UNLOAD Jan 24 00:35:26.110000 audit[3317]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303034383364336132383434653134636632653431623563636131 Jan 24 00:35:26.111000 audit: BPF prog-id=151 op=LOAD Jan 24 00:35:26.111000 audit[3317]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3307 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303034383364336132383434653134636632653431623563636131 Jan 24 00:35:26.111000 audit: BPF prog-id=152 op=LOAD Jan 24 00:35:26.111000 audit[3317]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3307 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303034383364336132383434653134636632653431623563636131 Jan 24 00:35:26.111000 audit: BPF prog-id=152 op=UNLOAD Jan 24 00:35:26.111000 audit[3317]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303034383364336132383434653134636632653431623563636131 Jan 24 00:35:26.111000 audit: BPF prog-id=151 op=UNLOAD Jan 24 
00:35:26.111000 audit[3317]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303034383364336132383434653134636632653431623563636131 Jan 24 00:35:26.111000 audit: BPF prog-id=153 op=LOAD Jan 24 00:35:26.111000 audit[3317]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3307 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303034383364336132383434653134636632653431623563636131 Jan 24 00:35:26.158159 containerd[1677]: time="2026-01-24T00:35:26.158122792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54559b7474-s26mc,Uid:b874e951-47cf-4441-a43d-dcb906cc0877,Namespace:calico-system,Attempt:0,} returns sandbox id \"e100483d3a2844e14cf2e41b5cca1078f1f92289b2fd23e4ea669c53639555f7\"" Jan 24 00:35:26.161483 containerd[1677]: time="2026-01-24T00:35:26.161440670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:35:26.176614 containerd[1677]: time="2026-01-24T00:35:26.176583168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gf56,Uid:f99f2681-2d99-41c4-bc78-961762b74b6c,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:26.184440 kubelet[2887]: E0124 00:35:26.183618 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.184440 kubelet[2887]: W0124 00:35:26.184442 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.184589 kubelet[2887]: E0124 00:35:26.184461 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.184589 kubelet[2887]: I0124 00:35:26.184491 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe469055-bc9a-468d-9724-6bf26a67fb3d-registration-dir\") pod \"csi-node-driver-x8bpc\" (UID: \"fe469055-bc9a-468d-9724-6bf26a67fb3d\") " pod="calico-system/csi-node-driver-x8bpc" Jan 24 00:35:26.184814 kubelet[2887]: E0124 00:35:26.184801 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.184814 kubelet[2887]: W0124 00:35:26.184813 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.185057 kubelet[2887]: E0124 00:35:26.185046 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.185094 kubelet[2887]: W0124 00:35:26.185057 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.185094 kubelet[2887]: E0124 00:35:26.185066 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.185669 kubelet[2887]: E0124 00:35:26.184854 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.185669 kubelet[2887]: I0124 00:35:26.185640 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe469055-bc9a-468d-9724-6bf26a67fb3d-socket-dir\") pod \"csi-node-driver-x8bpc\" (UID: \"fe469055-bc9a-468d-9724-6bf26a67fb3d\") " pod="calico-system/csi-node-driver-x8bpc" Jan 24 00:35:26.185766 kubelet[2887]: E0124 00:35:26.185755 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.185794 kubelet[2887]: W0124 00:35:26.185770 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.185794 kubelet[2887]: E0124 00:35:26.185783 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.186271 kubelet[2887]: E0124 00:35:26.186245 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.186489 kubelet[2887]: W0124 00:35:26.186257 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.186489 kubelet[2887]: E0124 00:35:26.186460 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.186762 kubelet[2887]: E0124 00:35:26.186751 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.186762 kubelet[2887]: W0124 00:35:26.186761 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.186852 kubelet[2887]: E0124 00:35:26.186841 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.188412 kubelet[2887]: E0124 00:35:26.188252 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.188412 kubelet[2887]: W0124 00:35:26.188263 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.188412 kubelet[2887]: E0124 00:35:26.188272 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.188412 kubelet[2887]: I0124 00:35:26.188296 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe469055-bc9a-468d-9724-6bf26a67fb3d-kubelet-dir\") pod \"csi-node-driver-x8bpc\" (UID: \"fe469055-bc9a-468d-9724-6bf26a67fb3d\") " pod="calico-system/csi-node-driver-x8bpc" Jan 24 00:35:26.188530 kubelet[2887]: E0124 00:35:26.188443 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.188530 kubelet[2887]: W0124 00:35:26.188450 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.188530 kubelet[2887]: E0124 00:35:26.188456 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.188587 kubelet[2887]: I0124 00:35:26.188469 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe469055-bc9a-468d-9724-6bf26a67fb3d-varrun\") pod \"csi-node-driver-x8bpc\" (UID: \"fe469055-bc9a-468d-9724-6bf26a67fb3d\") " pod="calico-system/csi-node-driver-x8bpc" Jan 24 00:35:26.188660 kubelet[2887]: E0124 00:35:26.188615 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.188660 kubelet[2887]: W0124 00:35:26.188633 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.188660 kubelet[2887]: E0124 00:35:26.188647 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.188822 kubelet[2887]: E0124 00:35:26.188812 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.188854 kubelet[2887]: W0124 00:35:26.188824 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.188854 kubelet[2887]: E0124 00:35:26.188834 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.188978 kubelet[2887]: E0124 00:35:26.188969 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.188978 kubelet[2887]: W0124 00:35:26.188977 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.189030 kubelet[2887]: E0124 00:35:26.188988 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.189030 kubelet[2887]: I0124 00:35:26.189001 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gqvm\" (UniqueName: \"kubernetes.io/projected/fe469055-bc9a-468d-9724-6bf26a67fb3d-kube-api-access-4gqvm\") pod \"csi-node-driver-x8bpc\" (UID: \"fe469055-bc9a-468d-9724-6bf26a67fb3d\") " pod="calico-system/csi-node-driver-x8bpc" Jan 24 00:35:26.189294 kubelet[2887]: E0124 00:35:26.189277 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.189294 kubelet[2887]: W0124 00:35:26.189287 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.189353 kubelet[2887]: E0124 00:35:26.189300 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.189478 kubelet[2887]: E0124 00:35:26.189442 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.189478 kubelet[2887]: W0124 00:35:26.189447 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.189478 kubelet[2887]: E0124 00:35:26.189458 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.189716 kubelet[2887]: E0124 00:35:26.189668 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.189716 kubelet[2887]: W0124 00:35:26.189674 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.189716 kubelet[2887]: E0124 00:35:26.189681 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.189956 kubelet[2887]: E0124 00:35:26.189840 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.189956 kubelet[2887]: W0124 00:35:26.189846 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.189956 kubelet[2887]: E0124 00:35:26.189852 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.200224 containerd[1677]: time="2026-01-24T00:35:26.200052430Z" level=info msg="connecting to shim e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc" address="unix:///run/containerd/s/4ac161e2d5363b0982a62ba85fc562c3aee6ae98a4eba2d0855dd407514c25be" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:26.221483 systemd[1]: Started cri-containerd-e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc.scope - libcontainer container e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc. 
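Note on the repeated kubelet messages above: they all stem from one condition. The FlexVolume prober finds the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the driver executable inside it cannot be run ("executable file not found in $PATH"), so the `init` call returns empty output and the JSON decode fails with "unexpected end of JSON input". The probe is retried on every plugin scan and volume-reconcile pass, which is why the same three-line pattern (driver-call.go, the W line, then plugins.go) recurs; on a Calico node this appears to be transient noise until the flexvol-driver init container created later in this log installs the driver. For reference only — this is a hypothetical stand-in, not the Calico uds driver — a FlexVolume driver's `init` call is expected to print a JSON status object on stdout, roughly like the sketch below (Python, illustrative):

    #!/usr/bin/env python3
    # Hypothetical stand-in for a FlexVolume driver's "init" call (illustrative
    # only; NOT the Calico uds driver). The kubelet runs "<driver> init" and
    # expects a JSON status object on stdout; an empty stdout is exactly what
    # produces the "unexpected end of JSON input" errors in the log above.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # "attach": False advertises that this driver has no attach/detach phase.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

With a response of that shape on stdout, driver-call.go would unmarshal the output instead of failing on empty input.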
Jan 24 00:35:26.230000 audit: BPF prog-id=154 op=LOAD Jan 24 00:35:26.232788 kernel: kauditd_printk_skb: 53 callbacks suppressed Jan 24 00:35:26.232830 kernel: audit: type=1334 audit(1769214926.230:538): prog-id=154 op=LOAD Jan 24 00:35:26.233000 audit: BPF prog-id=155 op=LOAD Jan 24 00:35:26.237027 kernel: audit: type=1334 audit(1769214926.233:539): prog-id=155 op=LOAD Jan 24 00:35:26.237055 kernel: audit: type=1300 audit(1769214926.233:539): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.233000 audit[3392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.246226 kernel: audit: type=1327 audit(1769214926.233:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.235000 audit: BPF prog-id=155 op=UNLOAD Jan 24 00:35:26.235000 audit[3392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.250393 kernel: audit: type=1334 audit(1769214926.235:540): prog-id=155 op=UNLOAD Jan 24 00:35:26.250432 kernel: audit: type=1300 audit(1769214926.235:540): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.236000 audit: BPF prog-id=156 op=LOAD Jan 24 00:35:26.258866 kernel: audit: type=1327 audit(1769214926.235:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.258895 kernel: audit: type=1334 audit(1769214926.236:541): prog-id=156 op=LOAD Jan 24 00:35:26.236000 audit[3392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.261416 kernel: audit: type=1300 audit(1769214926.236:541): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.265649 kernel: audit: type=1327 audit(1769214926.236:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.267821 containerd[1677]: time="2026-01-24T00:35:26.267778195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gf56,Uid:f99f2681-2d99-41c4-bc78-961762b74b6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc\"" Jan 24 00:35:26.236000 audit: BPF prog-id=157 op=LOAD Jan 24 00:35:26.236000 audit[3392]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.236000 audit: BPF prog-id=157 op=UNLOAD Jan 24 00:35:26.236000 audit[3392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.236000 audit: BPF prog-id=156 op=UNLOAD Jan 24 00:35:26.236000 audit[3392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.236000 audit: BPF prog-id=158 op=LOAD Jan 24 00:35:26.236000 audit[3392]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3382 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:26.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653961623838373633323566613862386630373731373164373135 Jan 24 00:35:26.290014 kubelet[2887]: E0124 00:35:26.289950 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.290202 kubelet[2887]: W0124 00:35:26.290103 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.290202 kubelet[2887]: E0124 00:35:26.290123 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.290442 kubelet[2887]: E0124 00:35:26.290428 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.290619 kubelet[2887]: W0124 00:35:26.290545 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.290619 kubelet[2887]: E0124 00:35:26.290564 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.290955 kubelet[2887]: E0124 00:35:26.290926 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.291143 kubelet[2887]: W0124 00:35:26.291068 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.291143 kubelet[2887]: E0124 00:35:26.291086 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.291331 kubelet[2887]: E0124 00:35:26.291324 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.291378 kubelet[2887]: W0124 00:35:26.291358 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.291470 kubelet[2887]: E0124 00:35:26.291419 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.291607 kubelet[2887]: E0124 00:35:26.291601 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.291652 kubelet[2887]: W0124 00:35:26.291647 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.291721 kubelet[2887]: E0124 00:35:26.291694 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.291857 kubelet[2887]: E0124 00:35:26.291841 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.291857 kubelet[2887]: W0124 00:35:26.291848 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.291950 kubelet[2887]: E0124 00:35:26.291933 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.292101 kubelet[2887]: E0124 00:35:26.292084 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.292101 kubelet[2887]: W0124 00:35:26.292090 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.292250 kubelet[2887]: E0124 00:35:26.292230 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.292451 kubelet[2887]: E0124 00:35:26.292396 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.292451 kubelet[2887]: W0124 00:35:26.292403 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.292451 kubelet[2887]: E0124 00:35:26.292414 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.292626 kubelet[2887]: E0124 00:35:26.292620 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.292753 kubelet[2887]: W0124 00:35:26.292677 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.292753 kubelet[2887]: E0124 00:35:26.292691 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.292849 kubelet[2887]: E0124 00:35:26.292844 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.292895 kubelet[2887]: W0124 00:35:26.292890 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.292948 kubelet[2887]: E0124 00:35:26.292936 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.293137 kubelet[2887]: E0124 00:35:26.293106 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.293177 kubelet[2887]: W0124 00:35:26.293171 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.293320 kubelet[2887]: E0124 00:35:26.293313 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.293450 kubelet[2887]: E0124 00:35:26.293413 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.293450 kubelet[2887]: W0124 00:35:26.293420 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.293450 kubelet[2887]: E0124 00:35:26.293436 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.293657 kubelet[2887]: E0124 00:35:26.293612 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.293657 kubelet[2887]: W0124 00:35:26.293629 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.293657 kubelet[2887]: E0124 00:35:26.293646 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.293884 kubelet[2887]: E0124 00:35:26.293822 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.293884 kubelet[2887]: W0124 00:35:26.293829 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.293884 kubelet[2887]: E0124 00:35:26.293842 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.294047 kubelet[2887]: E0124 00:35:26.294042 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.294127 kubelet[2887]: W0124 00:35:26.294078 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.294127 kubelet[2887]: E0124 00:35:26.294090 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.294337 kubelet[2887]: E0124 00:35:26.294330 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.294409 kubelet[2887]: W0124 00:35:26.294370 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.294409 kubelet[2887]: E0124 00:35:26.294383 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.294615 kubelet[2887]: E0124 00:35:26.294543 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.294615 kubelet[2887]: W0124 00:35:26.294549 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.294615 kubelet[2887]: E0124 00:35:26.294559 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.294792 kubelet[2887]: E0124 00:35:26.294786 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.294855 kubelet[2887]: W0124 00:35:26.294826 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.294855 kubelet[2887]: E0124 00:35:26.294842 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.295025 kubelet[2887]: E0124 00:35:26.294991 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.295025 kubelet[2887]: W0124 00:35:26.294998 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.295025 kubelet[2887]: E0124 00:35:26.295011 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.295183 kubelet[2887]: E0124 00:35:26.295178 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.295269 kubelet[2887]: W0124 00:35:26.295239 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.295269 kubelet[2887]: E0124 00:35:26.295254 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.295459 kubelet[2887]: E0124 00:35:26.295407 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.295459 kubelet[2887]: W0124 00:35:26.295414 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.295459 kubelet[2887]: E0124 00:35:26.295427 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.295640 kubelet[2887]: E0124 00:35:26.295634 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.295743 kubelet[2887]: W0124 00:35:26.295671 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.295743 kubelet[2887]: E0124 00:35:26.295688 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.295904 kubelet[2887]: E0124 00:35:26.295899 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.295938 kubelet[2887]: W0124 00:35:26.295933 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.295985 kubelet[2887]: E0124 00:35:26.295978 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.296182 kubelet[2887]: E0124 00:35:26.296175 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.296324 kubelet[2887]: W0124 00:35:26.296238 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.296324 kubelet[2887]: E0124 00:35:26.296250 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:26.296487 kubelet[2887]: E0124 00:35:26.296475 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.296513 kubelet[2887]: W0124 00:35:26.296486 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.296513 kubelet[2887]: E0124 00:35:26.296496 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:26.304391 kubelet[2887]: E0124 00:35:26.304379 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:26.304504 kubelet[2887]: W0124 00:35:26.304430 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:26.304504 kubelet[2887]: E0124 00:35:26.304442 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:27.653455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293732790.mount: Deactivated successfully. Jan 24 00:35:27.655237 kubelet[2887]: E0124 00:35:27.654759 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:28.924824 containerd[1677]: time="2026-01-24T00:35:28.924747137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:28.927451 containerd[1677]: time="2026-01-24T00:35:28.927419585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 24 00:35:28.929248 containerd[1677]: time="2026-01-24T00:35:28.928859040Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:28.931602 containerd[1677]: time="2026-01-24T00:35:28.931579766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:28.932735 containerd[1677]: time="2026-01-24T00:35:28.932464350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.77097855s" Jan 24 00:35:28.932735 containerd[1677]: time="2026-01-24T00:35:28.932489693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 
00:35:28.934854 containerd[1677]: time="2026-01-24T00:35:28.934837319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:35:28.950443 containerd[1677]: time="2026-01-24T00:35:28.950256072Z" level=info msg="CreateContainer within sandbox \"e100483d3a2844e14cf2e41b5cca1078f1f92289b2fd23e4ea669c53639555f7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:35:28.960749 containerd[1677]: time="2026-01-24T00:35:28.960723460Z" level=info msg="Container 62f8e178536cfbae95f6e29bd8b700fd855431da5d6ba981342eb8308d26627e: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:28.972870 containerd[1677]: time="2026-01-24T00:35:28.972836475Z" level=info msg="CreateContainer within sandbox \"e100483d3a2844e14cf2e41b5cca1078f1f92289b2fd23e4ea669c53639555f7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62f8e178536cfbae95f6e29bd8b700fd855431da5d6ba981342eb8308d26627e\"" Jan 24 00:35:28.973975 containerd[1677]: time="2026-01-24T00:35:28.973844726Z" level=info msg="StartContainer for \"62f8e178536cfbae95f6e29bd8b700fd855431da5d6ba981342eb8308d26627e\"" Jan 24 00:35:28.975522 containerd[1677]: time="2026-01-24T00:35:28.975489733Z" level=info msg="connecting to shim 62f8e178536cfbae95f6e29bd8b700fd855431da5d6ba981342eb8308d26627e" address="unix:///run/containerd/s/bb5f83fcfbc90630eed237a5efdd5894edbdab2a0b046577477c35870f64f496" protocol=ttrpc version=3 Jan 24 00:35:29.000393 systemd[1]: Started cri-containerd-62f8e178536cfbae95f6e29bd8b700fd855431da5d6ba981342eb8308d26627e.scope - libcontainer container 62f8e178536cfbae95f6e29bd8b700fd855431da5d6ba981342eb8308d26627e. Jan 24 00:35:29.010000 audit: BPF prog-id=159 op=LOAD Jan 24 00:35:29.011000 audit: BPF prog-id=160 op=LOAD Jan 24 00:35:29.011000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3307 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:29.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663865313738353336636662616539356636653239626438623730 Jan 24 00:35:29.011000 audit: BPF prog-id=160 op=UNLOAD Jan 24 00:35:29.011000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:29.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663865313738353336636662616539356636653239626438623730 Jan 24 00:35:29.011000 audit: BPF prog-id=161 op=LOAD Jan 24 00:35:29.011000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3307 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:29.011000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663865313738353336636662616539356636653239626438623730 Jan 24 00:35:29.011000 audit: BPF prog-id=162 op=LOAD Jan 24 00:35:29.011000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3307 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:29.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663865313738353336636662616539356636653239626438623730 Jan 24 00:35:29.011000 audit: BPF prog-id=162 op=UNLOAD Jan 24 00:35:29.011000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:29.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663865313738353336636662616539356636653239626438623730 Jan 24 00:35:29.011000 audit: BPF prog-id=161 op=UNLOAD Jan 24 00:35:29.011000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:29.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663865313738353336636662616539356636653239626438623730 Jan 24 00:35:29.011000 audit: BPF prog-id=163 op=LOAD Jan 24 00:35:29.011000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3307 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:29.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663865313738353336636662616539356636653239626438623730 Jan 24 00:35:29.051385 containerd[1677]: time="2026-01-24T00:35:29.051316268Z" level=info msg="StartContainer for \"62f8e178536cfbae95f6e29bd8b700fd855431da5d6ba981342eb8308d26627e\" returns successfully" Jan 24 00:35:29.655806 kubelet[2887]: E0124 00:35:29.655484 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8bpc" 
podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:29.766227 kubelet[2887]: I0124 00:35:29.765916 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54559b7474-s26mc" podStartSLOduration=1.992629746 podStartE2EDuration="4.765902823s" podCreationTimestamp="2026-01-24 00:35:25 +0000 UTC" firstStartedPulling="2026-01-24 00:35:26.160754618 +0000 UTC m=+20.581685923" lastFinishedPulling="2026-01-24 00:35:28.934027693 +0000 UTC m=+23.354959000" observedRunningTime="2026-01-24 00:35:29.765683548 +0000 UTC m=+24.186614876" watchObservedRunningTime="2026-01-24 00:35:29.765902823 +0000 UTC m=+24.186834150" Jan 24 00:35:29.813466 kubelet[2887]: E0124 00:35:29.813371 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.813466 kubelet[2887]: W0124 00:35:29.813403 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.813466 kubelet[2887]: E0124 00:35:29.813422 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.813916 kubelet[2887]: E0124 00:35:29.813842 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.813916 kubelet[2887]: W0124 00:35:29.813853 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.813916 kubelet[2887]: E0124 00:35:29.813864 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.814191 kubelet[2887]: E0124 00:35:29.814156 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.814191 kubelet[2887]: W0124 00:35:29.814164 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.814191 kubelet[2887]: E0124 00:35:29.814173 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.814540 kubelet[2887]: E0124 00:35:29.814472 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.814540 kubelet[2887]: W0124 00:35:29.814480 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.814540 kubelet[2887]: E0124 00:35:29.814486 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:29.814773 kubelet[2887]: E0124 00:35:29.814755 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.814773 kubelet[2887]: W0124 00:35:29.814762 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.814911 kubelet[2887]: E0124 00:35:29.814833 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.815051 kubelet[2887]: E0124 00:35:29.815045 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.815127 kubelet[2887]: W0124 00:35:29.815084 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.815127 kubelet[2887]: E0124 00:35:29.815102 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.815319 kubelet[2887]: E0124 00:35:29.815309 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.815391 kubelet[2887]: W0124 00:35:29.815354 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.815391 kubelet[2887]: E0124 00:35:29.815363 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.815594 kubelet[2887]: E0124 00:35:29.815559 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.815594 kubelet[2887]: W0124 00:35:29.815566 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.815594 kubelet[2887]: E0124 00:35:29.815573 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.815929 kubelet[2887]: E0124 00:35:29.815912 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.815978 kubelet[2887]: W0124 00:35:29.815929 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.815978 kubelet[2887]: E0124 00:35:29.815942 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:29.816077 kubelet[2887]: E0124 00:35:29.816068 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.816077 kubelet[2887]: W0124 00:35:29.816076 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.816141 kubelet[2887]: E0124 00:35:29.816083 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.816221 kubelet[2887]: E0124 00:35:29.816199 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.816248 kubelet[2887]: W0124 00:35:29.816218 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.816248 kubelet[2887]: E0124 00:35:29.816228 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.816350 kubelet[2887]: E0124 00:35:29.816341 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.816350 kubelet[2887]: W0124 00:35:29.816349 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.816395 kubelet[2887]: E0124 00:35:29.816355 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.816478 kubelet[2887]: E0124 00:35:29.816470 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.816504 kubelet[2887]: W0124 00:35:29.816477 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.816504 kubelet[2887]: E0124 00:35:29.816483 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.816593 kubelet[2887]: E0124 00:35:29.816584 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.816593 kubelet[2887]: W0124 00:35:29.816591 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.816637 kubelet[2887]: E0124 00:35:29.816597 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:29.816708 kubelet[2887]: E0124 00:35:29.816700 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.816708 kubelet[2887]: W0124 00:35:29.816708 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.816758 kubelet[2887]: E0124 00:35:29.816713 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.917270 kubelet[2887]: E0124 00:35:29.916736 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.917270 kubelet[2887]: W0124 00:35:29.916792 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.919263 kubelet[2887]: E0124 00:35:29.917187 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.919511 kubelet[2887]: E0124 00:35:29.919492 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.919586 kubelet[2887]: W0124 00:35:29.919573 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.919673 kubelet[2887]: E0124 00:35:29.919661 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.920094 kubelet[2887]: E0124 00:35:29.920066 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.920094 kubelet[2887]: W0124 00:35:29.920090 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.920180 kubelet[2887]: E0124 00:35:29.920125 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.920475 kubelet[2887]: E0124 00:35:29.920458 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.920521 kubelet[2887]: W0124 00:35:29.920474 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.920521 kubelet[2887]: E0124 00:35:29.920513 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:29.920843 kubelet[2887]: E0124 00:35:29.920827 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.920843 kubelet[2887]: W0124 00:35:29.920842 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.920908 kubelet[2887]: E0124 00:35:29.920880 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.921275 kubelet[2887]: E0124 00:35:29.921259 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.921326 kubelet[2887]: W0124 00:35:29.921275 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.921481 kubelet[2887]: E0124 00:35:29.921353 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.921526 kubelet[2887]: E0124 00:35:29.921494 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.921526 kubelet[2887]: W0124 00:35:29.921504 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.921864 kubelet[2887]: E0124 00:35:29.921791 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.921864 kubelet[2887]: W0124 00:35:29.921806 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.921864 kubelet[2887]: E0124 00:35:29.921844 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.922097 kubelet[2887]: E0124 00:35:29.922060 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.922097 kubelet[2887]: W0124 00:35:29.922079 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.922156 kubelet[2887]: E0124 00:35:29.922089 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.922428 kubelet[2887]: E0124 00:35:29.922389 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:29.922758 kubelet[2887]: E0124 00:35:29.922732 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.922817 kubelet[2887]: W0124 00:35:29.922778 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.922817 kubelet[2887]: E0124 00:35:29.922801 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.923197 kubelet[2887]: E0124 00:35:29.923179 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.923342 kubelet[2887]: W0124 00:35:29.923227 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.923342 kubelet[2887]: E0124 00:35:29.923247 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.923751 kubelet[2887]: E0124 00:35:29.923726 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.923751 kubelet[2887]: W0124 00:35:29.923741 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.924123 kubelet[2887]: E0124 00:35:29.923884 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.924123 kubelet[2887]: E0124 00:35:29.923942 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.924123 kubelet[2887]: W0124 00:35:29.923952 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.924451 kubelet[2887]: E0124 00:35:29.924166 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.924451 kubelet[2887]: W0124 00:35:29.924176 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.924451 kubelet[2887]: E0124 00:35:29.924187 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:29.924451 kubelet[2887]: E0124 00:35:29.924396 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.924451 kubelet[2887]: W0124 00:35:29.924405 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.924451 kubelet[2887]: E0124 00:35:29.924425 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.924930 kubelet[2887]: E0124 00:35:29.924403 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.924930 kubelet[2887]: E0124 00:35:29.924596 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.924930 kubelet[2887]: W0124 00:35:29.924605 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.924930 kubelet[2887]: E0124 00:35:29.924614 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.924930 kubelet[2887]: E0124 00:35:29.924824 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.924930 kubelet[2887]: W0124 00:35:29.924833 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.924930 kubelet[2887]: E0124 00:35:29.924842 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:35:29.925294 kubelet[2887]: E0124 00:35:29.925271 2887 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:35:29.925294 kubelet[2887]: W0124 00:35:29.925286 2887 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:35:29.925365 kubelet[2887]: E0124 00:35:29.925296 2887 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:35:30.418103 containerd[1677]: time="2026-01-24T00:35:30.418055398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:30.420247 containerd[1677]: time="2026-01-24T00:35:30.420221480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:30.423262 containerd[1677]: time="2026-01-24T00:35:30.423203257Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:30.426909 containerd[1677]: time="2026-01-24T00:35:30.426832618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:30.427799 containerd[1677]: time="2026-01-24T00:35:30.427704251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.49276896s" Jan 24 00:35:30.427799 containerd[1677]: time="2026-01-24T00:35:30.427727973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:35:30.429824 containerd[1677]: time="2026-01-24T00:35:30.429745634Z" level=info msg="CreateContainer within sandbox \"e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:35:30.444631 containerd[1677]: time="2026-01-24T00:35:30.444483353Z" level=info msg="Container cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:30.457541 containerd[1677]: time="2026-01-24T00:35:30.457502368Z" level=info msg="CreateContainer within sandbox \"e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756\"" Jan 24 00:35:30.458079 containerd[1677]: time="2026-01-24T00:35:30.458057864Z" level=info msg="StartContainer for \"cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756\"" Jan 24 00:35:30.459354 containerd[1677]: time="2026-01-24T00:35:30.459331550Z" level=info msg="connecting to shim cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756" address="unix:///run/containerd/s/4ac161e2d5363b0982a62ba85fc562c3aee6ae98a4eba2d0855dd407514c25be" protocol=ttrpc version=3 Jan 24 00:35:30.487377 systemd[1]: Started cri-containerd-cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756.scope - libcontainer container cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756. 
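The repeated kubelet errors above are the FlexVolume prober at work: the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal a JSON status object from its stdout; the binary is missing, stdout is empty, the unmarshal fails with "unexpected end of JSON input", and the plugin directory is skipped. As a rough illustration of the contract the error implies (not the real nodeagent~uds driver, which is not shown in this log), a minimal driver's init handler could look like this:

```go
// flexvol_init_sketch.go - minimal FlexVolume-style "init" handler.
// Illustration only; the actual nodeagent~uds driver is not part of this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object the kubelet's driver-call.go tries to
// unmarshal from the driver's stdout after each call.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s DriverStatus, code int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	switch cmd {
	case "init":
		// A non-empty JSON answer is what avoids the
		// "unexpected end of JSON input" unmarshal error seen above.
		reply(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	default:
		reply(DriverStatus{Status: "Not supported", Message: "unimplemented call: " + cmd}, 1)
	}
}
```

Because the probe re-runs on every plugin rescan, the same three-line failure repeats until either a working binary appears in that directory or the stale nodeagent~uds directory is removed.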
Jan 24 00:35:30.535000 audit: BPF prog-id=164 op=LOAD Jan 24 00:35:30.535000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3382 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:30.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383162323966616138326165646534323737333938383531306361 Jan 24 00:35:30.535000 audit: BPF prog-id=165 op=LOAD Jan 24 00:35:30.535000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3382 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:30.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383162323966616138326165646534323737333938383531306361 Jan 24 00:35:30.535000 audit: BPF prog-id=165 op=UNLOAD Jan 24 00:35:30.535000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:30.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383162323966616138326165646534323737333938383531306361 Jan 24 00:35:30.535000 audit: BPF prog-id=164 op=UNLOAD Jan 24 00:35:30.535000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:30.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383162323966616138326165646534323737333938383531306361 Jan 24 00:35:30.535000 audit: BPF prog-id=166 op=LOAD Jan 24 00:35:30.535000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3382 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:30.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383162323966616138326165646534323737333938383531306361 Jan 24 00:35:30.554654 containerd[1677]: time="2026-01-24T00:35:30.554606421Z" level=info msg="StartContainer for 
\"cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756\" returns successfully" Jan 24 00:35:30.564636 systemd[1]: cri-containerd-cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756.scope: Deactivated successfully. Jan 24 00:35:30.565000 audit: BPF prog-id=166 op=UNLOAD Jan 24 00:35:30.568355 containerd[1677]: time="2026-01-24T00:35:30.568328955Z" level=info msg="received container exit event container_id:\"cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756\" id:\"cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756\" pid:3541 exited_at:{seconds:1769214930 nanos:566868240}" Jan 24 00:35:30.590012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc81b29faa82aede42773988510cad10562bf73d063861ddfb6dbc81ba6de756-rootfs.mount: Deactivated successfully. Jan 24 00:35:30.757988 kubelet[2887]: I0124 00:35:30.757961 2887 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:35:30.759732 containerd[1677]: time="2026-01-24T00:35:30.759380635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:35:31.656895 kubelet[2887]: E0124 00:35:31.656544 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:32.153220 kubelet[2887]: I0124 00:35:32.153173 2887 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:35:32.183000 audit[3580]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:32.185473 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 24 00:35:32.185517 kernel: audit: type=1325 audit(1769214932.183:560): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:32.183000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd94dc5aa0 a2=0 a3=7ffd94dc5a8c items=0 ppid=3004 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:32.192527 kernel: audit: type=1300 audit(1769214932.183:560): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd94dc5aa0 a2=0 a3=7ffd94dc5a8c items=0 ppid=3004 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:32.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:32.197647 kernel: audit: type=1327 audit(1769214932.183:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:32.197000 audit[3580]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:32.200998 kernel: audit: type=1325 audit(1769214932.197:561): table=nat:118 family=2 entries=19 op=nft_register_chain pid=3580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 
00:35:32.197000 audit[3580]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd94dc5aa0 a2=0 a3=7ffd94dc5a8c items=0 ppid=3004 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:32.204571 kernel: audit: type=1300 audit(1769214932.197:561): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd94dc5aa0 a2=0 a3=7ffd94dc5a8c items=0 ppid=3004 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:32.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:32.209195 kernel: audit: type=1327 audit(1769214932.197:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:33.655477 kubelet[2887]: E0124 00:35:33.655428 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:34.410255 containerd[1677]: time="2026-01-24T00:35:34.409891431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:34.412896 containerd[1677]: time="2026-01-24T00:35:34.412862952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 24 00:35:34.413825 containerd[1677]: time="2026-01-24T00:35:34.413803763Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:34.415915 containerd[1677]: time="2026-01-24T00:35:34.415894021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:34.416870 containerd[1677]: time="2026-01-24T00:35:34.416849544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.6574441s" Jan 24 00:35:34.416914 containerd[1677]: time="2026-01-24T00:35:34.416874558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:35:34.418778 containerd[1677]: time="2026-01-24T00:35:34.418754566Z" level=info msg="CreateContainer within sandbox \"e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:35:34.430815 containerd[1677]: time="2026-01-24T00:35:34.430784461Z" level=info msg="Container 06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104: CDI devices from CRI 
Config.CDIDevices: []" Jan 24 00:35:34.440440 containerd[1677]: time="2026-01-24T00:35:34.440400223Z" level=info msg="CreateContainer within sandbox \"e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104\"" Jan 24 00:35:34.443073 containerd[1677]: time="2026-01-24T00:35:34.441270100Z" level=info msg="StartContainer for \"06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104\"" Jan 24 00:35:34.443073 containerd[1677]: time="2026-01-24T00:35:34.442435994Z" level=info msg="connecting to shim 06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104" address="unix:///run/containerd/s/4ac161e2d5363b0982a62ba85fc562c3aee6ae98a4eba2d0855dd407514c25be" protocol=ttrpc version=3 Jan 24 00:35:34.464428 systemd[1]: Started cri-containerd-06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104.scope - libcontainer container 06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104. Jan 24 00:35:34.514000 audit: BPF prog-id=167 op=LOAD Jan 24 00:35:34.514000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3382 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:34.518826 kernel: audit: type=1334 audit(1769214934.514:562): prog-id=167 op=LOAD Jan 24 00:35:34.518864 kernel: audit: type=1300 audit(1769214934.514:562): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3382 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:34.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313535623530363535353333626363653638663538323434653866 Jan 24 00:35:34.523237 kernel: audit: type=1327 audit(1769214934.514:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313535623530363535353333626363653638663538323434653866 Jan 24 00:35:34.514000 audit: BPF prog-id=168 op=LOAD Jan 24 00:35:34.526242 kernel: audit: type=1334 audit(1769214934.514:563): prog-id=168 op=LOAD Jan 24 00:35:34.514000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3382 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:34.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313535623530363535353333626363653638663538323434653866 Jan 24 00:35:34.515000 audit: BPF prog-id=168 op=UNLOAD Jan 24 00:35:34.515000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:34.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313535623530363535353333626363653638663538323434653866 Jan 24 00:35:34.515000 audit: BPF prog-id=167 op=UNLOAD Jan 24 00:35:34.515000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:34.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313535623530363535353333626363653638663538323434653866 Jan 24 00:35:34.515000 audit: BPF prog-id=169 op=LOAD Jan 24 00:35:34.515000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3382 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:34.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313535623530363535353333626363653638663538323434653866 Jan 24 00:35:34.546284 containerd[1677]: time="2026-01-24T00:35:34.546251647Z" level=info msg="StartContainer for \"06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104\" returns successfully" Jan 24 00:35:34.991056 containerd[1677]: time="2026-01-24T00:35:34.991014180Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:35:34.992890 systemd[1]: cri-containerd-06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104.scope: Deactivated successfully. Jan 24 00:35:34.993345 systemd[1]: cri-containerd-06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104.scope: Consumed 419ms CPU time, 192.5M memory peak, 171.3M written to disk. Jan 24 00:35:34.994594 containerd[1677]: time="2026-01-24T00:35:34.994573171Z" level=info msg="received container exit event container_id:\"06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104\" id:\"06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104\" pid:3603 exited_at:{seconds:1769214934 nanos:994420514}" Jan 24 00:35:34.996000 audit: BPF prog-id=169 op=UNLOAD Jan 24 00:35:35.005931 kubelet[2887]: I0124 00:35:35.004823 2887 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:35:35.017043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06155b50655533bcce68f58244e8fad671d0ce63a2ab652fc354425105a7c104-rootfs.mount: Deactivated successfully. 
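The audit records in this stretch (BPF prog-id LOAD/UNLOAD events from runc, NETFILTER_CFG events from iptables-restore) carry the process command line as a hex-encoded PROCTITLE field with NUL-separated argv entries. Decoded, the iptables proctitle above reads iptables-restore -w 5 -W 100000 --noflush --counters, and the runc ones read runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>... (the tail of the hex is truncated in the journal). A small decoder, as a sketch:

```go
// proctitle_decode.go - decode an audit PROCTITLE hex string into argv.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: proctitle_decode <hex-proctitle>")
		os.Exit(2)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "not valid hex:", err)
		os.Exit(1)
	}
	// audit encodes the process title as argv strings joined by NUL bytes.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " "))
}
```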
Jan 24 00:35:35.055722 kubelet[2887]: I0124 00:35:35.055695 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9wp\" (UniqueName: \"kubernetes.io/projected/09588ba1-4540-4c39-997a-6cc3fbc4c31f-kube-api-access-6b9wp\") pod \"coredns-668d6bf9bc-f49pv\" (UID: \"09588ba1-4540-4c39-997a-6cc3fbc4c31f\") " pod="kube-system/coredns-668d6bf9bc-f49pv" Jan 24 00:35:35.055892 kubelet[2887]: I0124 00:35:35.055728 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f695da60-5d07-4b6c-8f24-e49612d3b40f-tigera-ca-bundle\") pod \"calico-kube-controllers-64cbbc8dcd-w5nn4\" (UID: \"f695da60-5d07-4b6c-8f24-e49612d3b40f\") " pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" Jan 24 00:35:35.055892 kubelet[2887]: I0124 00:35:35.055745 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpt5k\" (UniqueName: \"kubernetes.io/projected/921dd561-510c-4c7e-86db-9c24acbd795a-kube-api-access-xpt5k\") pod \"coredns-668d6bf9bc-bqxrd\" (UID: \"921dd561-510c-4c7e-86db-9c24acbd795a\") " pod="kube-system/coredns-668d6bf9bc-bqxrd" Jan 24 00:35:35.055892 kubelet[2887]: I0124 00:35:35.055761 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09588ba1-4540-4c39-997a-6cc3fbc4c31f-config-volume\") pod \"coredns-668d6bf9bc-f49pv\" (UID: \"09588ba1-4540-4c39-997a-6cc3fbc4c31f\") " pod="kube-system/coredns-668d6bf9bc-f49pv" Jan 24 00:35:35.055892 kubelet[2887]: I0124 00:35:35.055777 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dj67\" (UniqueName: \"kubernetes.io/projected/f695da60-5d07-4b6c-8f24-e49612d3b40f-kube-api-access-8dj67\") pod \"calico-kube-controllers-64cbbc8dcd-w5nn4\" (UID: \"f695da60-5d07-4b6c-8f24-e49612d3b40f\") " pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" Jan 24 00:35:35.055892 kubelet[2887]: I0124 00:35:35.055791 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/921dd561-510c-4c7e-86db-9c24acbd795a-config-volume\") pod \"coredns-668d6bf9bc-bqxrd\" (UID: \"921dd561-510c-4c7e-86db-9c24acbd795a\") " pod="kube-system/coredns-668d6bf9bc-bqxrd" Jan 24 00:35:35.057282 systemd[1]: Created slice kubepods-burstable-pod921dd561_510c_4c7e_86db_9c24acbd795a.slice - libcontainer container kubepods-burstable-pod921dd561_510c_4c7e_86db_9c24acbd795a.slice. 
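The systemd slice names logged here can be read straight off the pod metadata in the same lines: pod UID 921dd561-510c-4c7e-86db-9c24acbd795a in the Burstable QoS class becomes kubepods-burstable-pod921dd561_510c_4c7e_86db_9c24acbd795a.slice, i.e. dashes in the UID turned into underscores and the unit nested under the QoS slice. A helper that reproduces just that visible pattern (the handling of Guaranteed pods below is an assumption, not shown in this log):

```go
// podslice.go - reproduce the kubepods slice names seen in the journal.
// Based only on the naming pattern visible in these log lines.
package main

import (
	"fmt"
	"strings"
)

// sliceName builds the systemd slice unit for a pod UID and QoS class
// ("burstable", "besteffort", or "" for Guaranteed pods, which are assumed
// here to sit directly under kubepods).
func sliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(sliceName("burstable", "921dd561-510c-4c7e-86db-9c24acbd795a"))
	fmt.Println(sliceName("besteffort", "f695da60-5d07-4b6c-8f24-e49612d3b40f"))
}
```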
Jan 24 00:35:35.069230 kubelet[2887]: W0124 00:35:35.068361 2887 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4593-0-0-7-bbab233dcd" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4593-0-0-7-bbab233dcd' and this object Jan 24 00:35:35.069386 kubelet[2887]: W0124 00:35:35.068385 2887 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4593-0-0-7-bbab233dcd" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4593-0-0-7-bbab233dcd' and this object Jan 24 00:35:35.069439 kubelet[2887]: E0124 00:35:35.069406 2887 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4593-0-0-7-bbab233dcd\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4593-0-0-7-bbab233dcd' and this object" logger="UnhandledError" Jan 24 00:35:35.069546 kubelet[2887]: E0124 00:35:35.069378 2887 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4593-0-0-7-bbab233dcd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4593-0-0-7-bbab233dcd' and this object" logger="UnhandledError" Jan 24 00:35:35.073812 systemd[1]: Created slice kubepods-burstable-pod09588ba1_4540_4c39_997a_6cc3fbc4c31f.slice - libcontainer container kubepods-burstable-pod09588ba1_4540_4c39_997a_6cc3fbc4c31f.slice. Jan 24 00:35:35.083669 systemd[1]: Created slice kubepods-besteffort-podf695da60_5d07_4b6c_8f24_e49612d3b40f.slice - libcontainer container kubepods-besteffort-podf695da60_5d07_4b6c_8f24_e49612d3b40f.slice. Jan 24 00:35:35.092799 systemd[1]: Created slice kubepods-besteffort-podc22b1229_18a4_4f62_9e22_d2ed4a3840d1.slice - libcontainer container kubepods-besteffort-podc22b1229_18a4_4f62_9e22_d2ed4a3840d1.slice. Jan 24 00:35:35.098446 systemd[1]: Created slice kubepods-besteffort-podbd7b47a3_e9a9_4695_b299_4e30c1f99caf.slice - libcontainer container kubepods-besteffort-podbd7b47a3_e9a9_4695_b299_4e30c1f99caf.slice. Jan 24 00:35:35.103922 systemd[1]: Created slice kubepods-besteffort-pod729c9040_8d83_4f84_a894_b4696b3e10ce.slice - libcontainer container kubepods-besteffort-pod729c9040_8d83_4f84_a894_b4696b3e10ce.slice. Jan 24 00:35:35.111829 systemd[1]: Created slice kubepods-besteffort-podcc6833e1_bfb0_4eb5_9ff2_60bda2e93290.slice - libcontainer container kubepods-besteffort-podcc6833e1_bfb0_4eb5_9ff2_60bda2e93290.slice. 
Jan 24 00:35:35.156821 kubelet[2887]: I0124 00:35:35.156776 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6833e1-bfb0-4eb5-9ff2-60bda2e93290-goldmane-ca-bundle\") pod \"goldmane-666569f655-dmlzg\" (UID: \"cc6833e1-bfb0-4eb5-9ff2-60bda2e93290\") " pod="calico-system/goldmane-666569f655-dmlzg" Jan 24 00:35:35.156821 kubelet[2887]: I0124 00:35:35.156821 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrdw\" (UniqueName: \"kubernetes.io/projected/bd7b47a3-e9a9-4695-b299-4e30c1f99caf-kube-api-access-6jrdw\") pod \"calico-apiserver-7b7c7f79fb-9wsxh\" (UID: \"bd7b47a3-e9a9-4695-b299-4e30c1f99caf\") " pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" Jan 24 00:35:35.157876 kubelet[2887]: I0124 00:35:35.156837 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-backend-key-pair\") pod \"whisker-77974dcd58-jlh76\" (UID: \"729c9040-8d83-4f84-a894-b4696b3e10ce\") " pod="calico-system/whisker-77974dcd58-jlh76" Jan 24 00:35:35.157876 kubelet[2887]: I0124 00:35:35.156852 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-ca-bundle\") pod \"whisker-77974dcd58-jlh76\" (UID: \"729c9040-8d83-4f84-a894-b4696b3e10ce\") " pod="calico-system/whisker-77974dcd58-jlh76" Jan 24 00:35:35.157876 kubelet[2887]: I0124 00:35:35.156870 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94kw\" (UniqueName: \"kubernetes.io/projected/729c9040-8d83-4f84-a894-b4696b3e10ce-kube-api-access-c94kw\") pod \"whisker-77974dcd58-jlh76\" (UID: \"729c9040-8d83-4f84-a894-b4696b3e10ce\") " pod="calico-system/whisker-77974dcd58-jlh76" Jan 24 00:35:35.157876 kubelet[2887]: I0124 00:35:35.156897 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqcg\" (UniqueName: \"kubernetes.io/projected/c22b1229-18a4-4f62-9e22-d2ed4a3840d1-kube-api-access-pnqcg\") pod \"calico-apiserver-7b7c7f79fb-x9lmq\" (UID: \"c22b1229-18a4-4f62-9e22-d2ed4a3840d1\") " pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" Jan 24 00:35:35.157876 kubelet[2887]: I0124 00:35:35.156922 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd7b47a3-e9a9-4695-b299-4e30c1f99caf-calico-apiserver-certs\") pod \"calico-apiserver-7b7c7f79fb-9wsxh\" (UID: \"bd7b47a3-e9a9-4695-b299-4e30c1f99caf\") " pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" Jan 24 00:35:35.157990 kubelet[2887]: I0124 00:35:35.156941 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cc6833e1-bfb0-4eb5-9ff2-60bda2e93290-goldmane-key-pair\") pod \"goldmane-666569f655-dmlzg\" (UID: \"cc6833e1-bfb0-4eb5-9ff2-60bda2e93290\") " pod="calico-system/goldmane-666569f655-dmlzg" Jan 24 00:35:35.157990 kubelet[2887]: I0124 00:35:35.156980 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/c22b1229-18a4-4f62-9e22-d2ed4a3840d1-calico-apiserver-certs\") pod \"calico-apiserver-7b7c7f79fb-x9lmq\" (UID: \"c22b1229-18a4-4f62-9e22-d2ed4a3840d1\") " pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" Jan 24 00:35:35.157990 kubelet[2887]: I0124 00:35:35.156999 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhgh\" (UniqueName: \"kubernetes.io/projected/cc6833e1-bfb0-4eb5-9ff2-60bda2e93290-kube-api-access-ghhgh\") pod \"goldmane-666569f655-dmlzg\" (UID: \"cc6833e1-bfb0-4eb5-9ff2-60bda2e93290\") " pod="calico-system/goldmane-666569f655-dmlzg" Jan 24 00:35:35.157990 kubelet[2887]: I0124 00:35:35.157030 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6833e1-bfb0-4eb5-9ff2-60bda2e93290-config\") pod \"goldmane-666569f655-dmlzg\" (UID: \"cc6833e1-bfb0-4eb5-9ff2-60bda2e93290\") " pod="calico-system/goldmane-666569f655-dmlzg" Jan 24 00:35:35.368941 containerd[1677]: time="2026-01-24T00:35:35.368887137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqxrd,Uid:921dd561-510c-4c7e-86db-9c24acbd795a,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:35.385295 containerd[1677]: time="2026-01-24T00:35:35.385247636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f49pv,Uid:09588ba1-4540-4c39-997a-6cc3fbc4c31f,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:35.389232 containerd[1677]: time="2026-01-24T00:35:35.389191442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cbbc8dcd-w5nn4,Uid:f695da60-5d07-4b6c-8f24-e49612d3b40f,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:35.410718 containerd[1677]: time="2026-01-24T00:35:35.410688237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77974dcd58-jlh76,Uid:729c9040-8d83-4f84-a894-b4696b3e10ce,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:35.415779 containerd[1677]: time="2026-01-24T00:35:35.415282470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmlzg,Uid:cc6833e1-bfb0-4eb5-9ff2-60bda2e93290,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:35.443963 containerd[1677]: time="2026-01-24T00:35:35.443931318Z" level=error msg="Failed to destroy network for sandbox \"8c3e94867dc623d46d5432d1c869e166c8f5a3b57d64e758927bf573c9d716e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.445794 systemd[1]: run-netns-cni\x2dad7b5c43\x2d94f1\x2de7e3\x2d7901\x2dff39c441c9d2.mount: Deactivated successfully. 
Jan 24 00:35:35.448148 containerd[1677]: time="2026-01-24T00:35:35.448105911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqxrd,Uid:921dd561-510c-4c7e-86db-9c24acbd795a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3e94867dc623d46d5432d1c869e166c8f5a3b57d64e758927bf573c9d716e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.448517 kubelet[2887]: E0124 00:35:35.448450 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3e94867dc623d46d5432d1c869e166c8f5a3b57d64e758927bf573c9d716e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.448570 kubelet[2887]: E0124 00:35:35.448544 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3e94867dc623d46d5432d1c869e166c8f5a3b57d64e758927bf573c9d716e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bqxrd" Jan 24 00:35:35.448596 kubelet[2887]: E0124 00:35:35.448565 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3e94867dc623d46d5432d1c869e166c8f5a3b57d64e758927bf573c9d716e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bqxrd" Jan 24 00:35:35.448619 kubelet[2887]: E0124 00:35:35.448604 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bqxrd_kube-system(921dd561-510c-4c7e-86db-9c24acbd795a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bqxrd_kube-system(921dd561-510c-4c7e-86db-9c24acbd795a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c3e94867dc623d46d5432d1c869e166c8f5a3b57d64e758927bf573c9d716e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bqxrd" podUID="921dd561-510c-4c7e-86db-9c24acbd795a" Jan 24 00:35:35.506670 containerd[1677]: time="2026-01-24T00:35:35.506612467Z" level=error msg="Failed to destroy network for sandbox \"d5f31f860dcf2b0806bacef12953949e4363ab738c445ef63a32e3e8f4f3636b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.509084 systemd[1]: run-netns-cni\x2dd0ec65c5\x2d24db\x2dc5f8\x2d4987\x2d16185bee92d5.mount: Deactivated successfully. 
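From here on, every RunPodSandbox attempt fails on the same precondition that the error text itself spells out: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only exists once the calico/node container is running and has mounted /var/lib/calico/. A sketch of that check (a hypothetical helper, not Calico's own code):

```go
// nodename_check.go - mirror the precondition in the CNI error above:
// /var/lib/calico/nodename must exist before pod networking can be set up.
// Hypothetical helper, not Calico's own implementation.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/var/lib/calico/nodename"
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("%s missing: calico/node has not registered this node yet\n", path)
			os.Exit(1)
		}
		fmt.Println("stat failed:", err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", strings.TrimSpace(string(data)))
}
```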
Jan 24 00:35:35.511123 containerd[1677]: time="2026-01-24T00:35:35.511007846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cbbc8dcd-w5nn4,Uid:f695da60-5d07-4b6c-8f24-e49612d3b40f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f31f860dcf2b0806bacef12953949e4363ab738c445ef63a32e3e8f4f3636b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.511334 kubelet[2887]: E0124 00:35:35.511197 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f31f860dcf2b0806bacef12953949e4363ab738c445ef63a32e3e8f4f3636b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.511385 kubelet[2887]: E0124 00:35:35.511347 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f31f860dcf2b0806bacef12953949e4363ab738c445ef63a32e3e8f4f3636b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" Jan 24 00:35:35.511385 kubelet[2887]: E0124 00:35:35.511368 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f31f860dcf2b0806bacef12953949e4363ab738c445ef63a32e3e8f4f3636b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" Jan 24 00:35:35.511512 kubelet[2887]: E0124 00:35:35.511490 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64cbbc8dcd-w5nn4_calico-system(f695da60-5d07-4b6c-8f24-e49612d3b40f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64cbbc8dcd-w5nn4_calico-system(f695da60-5d07-4b6c-8f24-e49612d3b40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5f31f860dcf2b0806bacef12953949e4363ab738c445ef63a32e3e8f4f3636b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:35:35.521227 containerd[1677]: time="2026-01-24T00:35:35.519585872Z" level=error msg="Failed to destroy network for sandbox \"b62a3b777eeed47d590278df04d56d77440edb45d34c0a3be475db141767c122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.521200 systemd[1]: run-netns-cni\x2d2522aaf5\x2db3fc\x2d1b08\x2d57fd\x2daeefe9772148.mount: Deactivated successfully. 
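The other precondition was flagged a few seconds earlier by containerd's reload error: install-cni wrote /etc/cni/net.d/calico-kubeconfig, but no network config was found in /etc/cni/net.d, so the CNI plugin stays uninitialized and the kubelet keeps reporting NetworkPluginNotReady. "A network config" here means a parseable *.conf/*.conflist/*.json file in that directory (Calico typically writes a file such as 10-calico.conflist, though the name does not appear in this log); a sketch of the check:

```go
// cniconf_probe.go - report whether /etc/cni/net.d holds a parseable CNI
// network configuration. Sketch only; containerd's real loader applies more
// rules (ordering, cniVersion checks) than this.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type confList struct {
	Name    string            `json:"name"`
	Plugins []json.RawMessage `json:"plugins"`
}

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext != ".conf" && ext != ".conflist" && ext != ".json" {
			continue // e.g. calico-kubeconfig is not a network config
		}
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		var c confList
		if json.Unmarshal(data, &c) == nil && c.Name != "" {
			fmt.Printf("found network config %q in %s (%d plugin entries)\n", c.Name, e.Name(), len(c.Plugins))
			found = true
		}
	}
	if !found {
		fmt.Println("no network config found in", dir, "- CNI not initialized yet")
		os.Exit(1)
	}
}
```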
Jan 24 00:35:35.523930 containerd[1677]: time="2026-01-24T00:35:35.523898713Z" level=error msg="Failed to destroy network for sandbox \"795015a485379f742fa66a8b67fa855a2815fb7bec0c932bff6bb1dca332d316\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.526214 systemd[1]: run-netns-cni\x2d2287eba9\x2db3ff\x2de56e\x2d0865\x2d5e2f4b6d6d74.mount: Deactivated successfully. Jan 24 00:35:35.527975 containerd[1677]: time="2026-01-24T00:35:35.527858156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f49pv,Uid:09588ba1-4540-4c39-997a-6cc3fbc4c31f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62a3b777eeed47d590278df04d56d77440edb45d34c0a3be475db141767c122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.528472 kubelet[2887]: E0124 00:35:35.528411 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62a3b777eeed47d590278df04d56d77440edb45d34c0a3be475db141767c122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.528559 kubelet[2887]: E0124 00:35:35.528488 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62a3b777eeed47d590278df04d56d77440edb45d34c0a3be475db141767c122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-f49pv" Jan 24 00:35:35.528559 kubelet[2887]: E0124 00:35:35.528508 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62a3b777eeed47d590278df04d56d77440edb45d34c0a3be475db141767c122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-f49pv" Jan 24 00:35:35.528559 kubelet[2887]: E0124 00:35:35.528542 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-f49pv_kube-system(09588ba1-4540-4c39-997a-6cc3fbc4c31f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-f49pv_kube-system(09588ba1-4540-4c39-997a-6cc3fbc4c31f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b62a3b777eeed47d590278df04d56d77440edb45d34c0a3be475db141767c122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-f49pv" podUID="09588ba1-4540-4c39-997a-6cc3fbc4c31f" Jan 24 00:35:35.529846 containerd[1677]: time="2026-01-24T00:35:35.529353482Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-77974dcd58-jlh76,Uid:729c9040-8d83-4f84-a894-b4696b3e10ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"795015a485379f742fa66a8b67fa855a2815fb7bec0c932bff6bb1dca332d316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.530251 kubelet[2887]: E0124 00:35:35.530055 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795015a485379f742fa66a8b67fa855a2815fb7bec0c932bff6bb1dca332d316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.530310 kubelet[2887]: E0124 00:35:35.530268 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795015a485379f742fa66a8b67fa855a2815fb7bec0c932bff6bb1dca332d316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77974dcd58-jlh76" Jan 24 00:35:35.530310 kubelet[2887]: E0124 00:35:35.530286 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795015a485379f742fa66a8b67fa855a2815fb7bec0c932bff6bb1dca332d316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77974dcd58-jlh76" Jan 24 00:35:35.530363 kubelet[2887]: E0124 00:35:35.530323 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77974dcd58-jlh76_calico-system(729c9040-8d83-4f84-a894-b4696b3e10ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77974dcd58-jlh76_calico-system(729c9040-8d83-4f84-a894-b4696b3e10ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"795015a485379f742fa66a8b67fa855a2815fb7bec0c932bff6bb1dca332d316\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77974dcd58-jlh76" podUID="729c9040-8d83-4f84-a894-b4696b3e10ce" Jan 24 00:35:35.534539 containerd[1677]: time="2026-01-24T00:35:35.534451829Z" level=error msg="Failed to destroy network for sandbox \"66eda3771211d2be6c517cde97c940f310bce96e6fd4049de1e07a625bba74bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.536897 containerd[1677]: time="2026-01-24T00:35:35.536763673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmlzg,Uid:cc6833e1-bfb0-4eb5-9ff2-60bda2e93290,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66eda3771211d2be6c517cde97c940f310bce96e6fd4049de1e07a625bba74bb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.537280 kubelet[2887]: E0124 00:35:35.537243 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66eda3771211d2be6c517cde97c940f310bce96e6fd4049de1e07a625bba74bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.537334 kubelet[2887]: E0124 00:35:35.537310 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66eda3771211d2be6c517cde97c940f310bce96e6fd4049de1e07a625bba74bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dmlzg" Jan 24 00:35:35.537334 kubelet[2887]: E0124 00:35:35.537329 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66eda3771211d2be6c517cde97c940f310bce96e6fd4049de1e07a625bba74bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dmlzg" Jan 24 00:35:35.537378 kubelet[2887]: E0124 00:35:35.537359 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dmlzg_calico-system(cc6833e1-bfb0-4eb5-9ff2-60bda2e93290)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dmlzg_calico-system(cc6833e1-bfb0-4eb5-9ff2-60bda2e93290)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66eda3771211d2be6c517cde97c940f310bce96e6fd4049de1e07a625bba74bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:35:35.661188 systemd[1]: Created slice kubepods-besteffort-podfe469055_bc9a_468d_9724_6bf26a67fb3d.slice - libcontainer container kubepods-besteffort-podfe469055_bc9a_468d_9724_6bf26a67fb3d.slice. 
Jan 24 00:35:35.664247 containerd[1677]: time="2026-01-24T00:35:35.664223140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8bpc,Uid:fe469055-bc9a-468d-9724-6bf26a67fb3d,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:35.709822 containerd[1677]: time="2026-01-24T00:35:35.709782653Z" level=error msg="Failed to destroy network for sandbox \"6825a9bde768dbd033f29fa4743e548a0524b7485433265ae4976b91156f614f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.712434 containerd[1677]: time="2026-01-24T00:35:35.712351283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8bpc,Uid:fe469055-bc9a-468d-9724-6bf26a67fb3d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6825a9bde768dbd033f29fa4743e548a0524b7485433265ae4976b91156f614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.712829 kubelet[2887]: E0124 00:35:35.712540 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6825a9bde768dbd033f29fa4743e548a0524b7485433265ae4976b91156f614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:35.712829 kubelet[2887]: E0124 00:35:35.712698 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6825a9bde768dbd033f29fa4743e548a0524b7485433265ae4976b91156f614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x8bpc" Jan 24 00:35:35.712829 kubelet[2887]: E0124 00:35:35.712717 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6825a9bde768dbd033f29fa4743e548a0524b7485433265ae4976b91156f614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x8bpc" Jan 24 00:35:35.713285 kubelet[2887]: E0124 00:35:35.712768 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6825a9bde768dbd033f29fa4743e548a0524b7485433265ae4976b91156f614f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:35.784113 containerd[1677]: time="2026-01-24T00:35:35.784074298Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:35:36.276326 kubelet[2887]: E0124 00:35:36.275981 2887 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:35:36.276326 kubelet[2887]: E0124 00:35:36.276020 2887 projected.go:194] Error preparing data for projected volume kube-api-access-6jrdw for pod calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:35:36.276326 kubelet[2887]: E0124 00:35:36.276104 2887 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd7b47a3-e9a9-4695-b299-4e30c1f99caf-kube-api-access-6jrdw podName:bd7b47a3-e9a9-4695-b299-4e30c1f99caf nodeName:}" failed. No retries permitted until 2026-01-24 00:35:36.776082451 +0000 UTC m=+31.197013760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6jrdw" (UniqueName: "kubernetes.io/projected/bd7b47a3-e9a9-4695-b299-4e30c1f99caf-kube-api-access-6jrdw") pod "calico-apiserver-7b7c7f79fb-9wsxh" (UID: "bd7b47a3-e9a9-4695-b299-4e30c1f99caf") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:35:36.276326 kubelet[2887]: E0124 00:35:36.276323 2887 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:35:36.276326 kubelet[2887]: E0124 00:35:36.276335 2887 projected.go:194] Error preparing data for projected volume kube-api-access-pnqcg for pod calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:35:36.277458 kubelet[2887]: E0124 00:35:36.276377 2887 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c22b1229-18a4-4f62-9e22-d2ed4a3840d1-kube-api-access-pnqcg podName:c22b1229-18a4-4f62-9e22-d2ed4a3840d1 nodeName:}" failed. No retries permitted until 2026-01-24 00:35:36.776362246 +0000 UTC m=+31.197293556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pnqcg" (UniqueName: "kubernetes.io/projected/c22b1229-18a4-4f62-9e22-d2ed4a3840d1-kube-api-access-pnqcg") pod "calico-apiserver-7b7c7f79fb-x9lmq" (UID: "c22b1229-18a4-4f62-9e22-d2ed4a3840d1") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:35:36.433165 systemd[1]: run-netns-cni\x2d637c4e0a\x2d4135\x2d2097\x2d923f\x2dbaab21e30b05.mount: Deactivated successfully. 
Jan 24 00:35:36.897010 containerd[1677]: time="2026-01-24T00:35:36.896946288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-x9lmq,Uid:c22b1229-18a4-4f62-9e22-d2ed4a3840d1,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:35:36.901647 containerd[1677]: time="2026-01-24T00:35:36.901572146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-9wsxh,Uid:bd7b47a3-e9a9-4695-b299-4e30c1f99caf,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:35:36.966449 containerd[1677]: time="2026-01-24T00:35:36.966395287Z" level=error msg="Failed to destroy network for sandbox \"7cce070afe4e0f0379cf2fd6066cfd656460c1dc8271603b98e6a89c40426105\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:36.970029 containerd[1677]: time="2026-01-24T00:35:36.969440849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-9wsxh,Uid:bd7b47a3-e9a9-4695-b299-4e30c1f99caf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cce070afe4e0f0379cf2fd6066cfd656460c1dc8271603b98e6a89c40426105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:36.970506 kubelet[2887]: E0124 00:35:36.969763 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cce070afe4e0f0379cf2fd6066cfd656460c1dc8271603b98e6a89c40426105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:36.970506 kubelet[2887]: E0124 00:35:36.970498 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cce070afe4e0f0379cf2fd6066cfd656460c1dc8271603b98e6a89c40426105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" Jan 24 00:35:36.970693 kubelet[2887]: E0124 00:35:36.970526 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cce070afe4e0f0379cf2fd6066cfd656460c1dc8271603b98e6a89c40426105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" Jan 24 00:35:36.970693 kubelet[2887]: E0124 00:35:36.970581 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b7c7f79fb-9wsxh_calico-apiserver(bd7b47a3-e9a9-4695-b299-4e30c1f99caf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b7c7f79fb-9wsxh_calico-apiserver(bd7b47a3-e9a9-4695-b299-4e30c1f99caf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cce070afe4e0f0379cf2fd6066cfd656460c1dc8271603b98e6a89c40426105\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:35:36.980226 containerd[1677]: time="2026-01-24T00:35:36.980173616Z" level=error msg="Failed to destroy network for sandbox \"5b89eb04821dce513769f688da5a229a4d7295c6d76b734244c4fd5f130163f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:36.983113 containerd[1677]: time="2026-01-24T00:35:36.983075888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-x9lmq,Uid:c22b1229-18a4-4f62-9e22-d2ed4a3840d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b89eb04821dce513769f688da5a229a4d7295c6d76b734244c4fd5f130163f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:36.983349 kubelet[2887]: E0124 00:35:36.983312 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b89eb04821dce513769f688da5a229a4d7295c6d76b734244c4fd5f130163f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:35:36.983438 kubelet[2887]: E0124 00:35:36.983425 2887 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b89eb04821dce513769f688da5a229a4d7295c6d76b734244c4fd5f130163f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" Jan 24 00:35:36.983553 kubelet[2887]: E0124 00:35:36.983479 2887 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b89eb04821dce513769f688da5a229a4d7295c6d76b734244c4fd5f130163f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" Jan 24 00:35:36.983553 kubelet[2887]: E0124 00:35:36.983520 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b7c7f79fb-x9lmq_calico-apiserver(c22b1229-18a4-4f62-9e22-d2ed4a3840d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b7c7f79fb-x9lmq_calico-apiserver(c22b1229-18a4-4f62-9e22-d2ed4a3840d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b89eb04821dce513769f688da5a229a4d7295c6d76b734244c4fd5f130163f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:35:37.436590 systemd[1]: 
run-netns-cni\x2d37c166bb\x2d5868\x2dba15\x2d5cf3\x2d4d92eaabc00e.mount: Deactivated successfully. Jan 24 00:35:37.436711 systemd[1]: run-netns-cni\x2d4c58c832\x2da8dd\x2dcd05\x2d9bc9\x2de2b58401d203.mount: Deactivated successfully. Jan 24 00:35:42.600173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103493498.mount: Deactivated successfully. Jan 24 00:35:42.635146 containerd[1677]: time="2026-01-24T00:35:42.635094456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:42.636356 containerd[1677]: time="2026-01-24T00:35:42.636220634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 00:35:42.637487 containerd[1677]: time="2026-01-24T00:35:42.637465056Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:42.639749 containerd[1677]: time="2026-01-24T00:35:42.639730139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:35:42.640163 containerd[1677]: time="2026-01-24T00:35:42.640115065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.856006888s" Jan 24 00:35:42.640259 containerd[1677]: time="2026-01-24T00:35:42.640247137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:35:42.648644 containerd[1677]: time="2026-01-24T00:35:42.648561307Z" level=info msg="CreateContainer within sandbox \"e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:35:42.662239 containerd[1677]: time="2026-01-24T00:35:42.662186544Z" level=info msg="Container bef7b6d6f6309af35b9cdef2028f20ce0f15ac5d1cbe54ae7d46bbb3e9e47197: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:42.671755 containerd[1677]: time="2026-01-24T00:35:42.671722986Z" level=info msg="CreateContainer within sandbox \"e0e9ab8876325fa8b8f077171d715cf84dd568927c09990dc961bf7513640acc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bef7b6d6f6309af35b9cdef2028f20ce0f15ac5d1cbe54ae7d46bbb3e9e47197\"" Jan 24 00:35:42.673074 containerd[1677]: time="2026-01-24T00:35:42.673043387Z" level=info msg="StartContainer for \"bef7b6d6f6309af35b9cdef2028f20ce0f15ac5d1cbe54ae7d46bbb3e9e47197\"" Jan 24 00:35:42.675068 containerd[1677]: time="2026-01-24T00:35:42.675041732Z" level=info msg="connecting to shim bef7b6d6f6309af35b9cdef2028f20ce0f15ac5d1cbe54ae7d46bbb3e9e47197" address="unix:///run/containerd/s/4ac161e2d5363b0982a62ba85fc562c3aee6ae98a4eba2d0855dd407514c25be" protocol=ttrpc version=3 Jan 24 00:35:42.719402 systemd[1]: Started cri-containerd-bef7b6d6f6309af35b9cdef2028f20ce0f15ac5d1cbe54ae7d46bbb3e9e47197.scope - libcontainer container bef7b6d6f6309af35b9cdef2028f20ce0f15ac5d1cbe54ae7d46bbb3e9e47197. 
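The pull/create/start sequence recorded above runs through the CRI. As an illustration only, the sketch below reproduces the same three steps against containerd's Go client directly (v1 import paths), reusing the image reference and the k8s.io namespace that appear in the log and assuming the default /run/containerd/containerd.sock socket; this is not the kubelet's code path, and error handling is reduced to log.Fatal.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to containerd and work in the same namespace the log shows.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the kubelet pulled above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container from the image and start its task.
	container, err := client.NewContainer(ctx, "calico-node-example",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s", container.ID())
}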
Jan 24 00:35:42.774248 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 24 00:35:42.774352 kernel: audit: type=1334 audit(1769214942.771:568): prog-id=170 op=LOAD Jan 24 00:35:42.771000 audit: BPF prog-id=170 op=LOAD Jan 24 00:35:42.771000 audit[3860]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.777701 kernel: audit: type=1300 audit(1769214942.771:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.782422 kernel: audit: type=1327 audit(1769214942.771:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.771000 audit: BPF prog-id=171 op=LOAD Jan 24 00:35:42.771000 audit[3860]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.789546 kernel: audit: type=1334 audit(1769214942.771:569): prog-id=171 op=LOAD Jan 24 00:35:42.789594 kernel: audit: type=1300 audit(1769214942.771:569): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.794002 kernel: audit: type=1327 audit(1769214942.771:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.772000 audit: BPF prog-id=171 op=UNLOAD Jan 24 00:35:42.797705 kernel: audit: type=1334 audit(1769214942.772:570): prog-id=171 op=UNLOAD Jan 24 00:35:42.772000 audit[3860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.799973 kernel: audit: type=1300 
audit(1769214942.772:570): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.804511 kernel: audit: type=1327 audit(1769214942.772:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.772000 audit: BPF prog-id=170 op=UNLOAD Jan 24 00:35:42.807263 kernel: audit: type=1334 audit(1769214942.772:571): prog-id=170 op=UNLOAD Jan 24 00:35:42.772000 audit[3860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.772000 audit: BPF prog-id=172 op=LOAD Jan 24 00:35:42.772000 audit[3860]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3382 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:42.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265663762366436663633303961663335623963646566323032386632 Jan 24 00:35:42.819229 containerd[1677]: time="2026-01-24T00:35:42.819155139Z" level=info msg="StartContainer for \"bef7b6d6f6309af35b9cdef2028f20ce0f15ac5d1cbe54ae7d46bbb3e9e47197\" returns successfully" Jan 24 00:35:42.907852 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:35:42.907958 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
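The proctitle= fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments. A short sketch that decodes the beginning of the runc proctitle from audit event 568:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Leading portion of the proctitle from audit event 568 above,
	// truncated here to keep the example short.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are separated by NUL bytes in the decoded command line.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: runc --root /run/containerd/runc/k8s.io
}

Decoding the full field the same way recovers the rest of the runc invocation, including the --log flag and the containerd task path.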
Jan 24 00:35:43.108592 kubelet[2887]: I0124 00:35:43.108557 2887 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94kw\" (UniqueName: \"kubernetes.io/projected/729c9040-8d83-4f84-a894-b4696b3e10ce-kube-api-access-c94kw\") pod \"729c9040-8d83-4f84-a894-b4696b3e10ce\" (UID: \"729c9040-8d83-4f84-a894-b4696b3e10ce\") " Jan 24 00:35:43.108592 kubelet[2887]: I0124 00:35:43.108598 2887 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-backend-key-pair\") pod \"729c9040-8d83-4f84-a894-b4696b3e10ce\" (UID: \"729c9040-8d83-4f84-a894-b4696b3e10ce\") " Jan 24 00:35:43.108965 kubelet[2887]: I0124 00:35:43.108622 2887 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-ca-bundle\") pod \"729c9040-8d83-4f84-a894-b4696b3e10ce\" (UID: \"729c9040-8d83-4f84-a894-b4696b3e10ce\") " Jan 24 00:35:43.108965 kubelet[2887]: I0124 00:35:43.108926 2887 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "729c9040-8d83-4f84-a894-b4696b3e10ce" (UID: "729c9040-8d83-4f84-a894-b4696b3e10ce"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:35:43.116817 kubelet[2887]: I0124 00:35:43.116779 2887 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729c9040-8d83-4f84-a894-b4696b3e10ce-kube-api-access-c94kw" (OuterVolumeSpecName: "kube-api-access-c94kw") pod "729c9040-8d83-4f84-a894-b4696b3e10ce" (UID: "729c9040-8d83-4f84-a894-b4696b3e10ce"). InnerVolumeSpecName "kube-api-access-c94kw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:35:43.117135 kubelet[2887]: I0124 00:35:43.117120 2887 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "729c9040-8d83-4f84-a894-b4696b3e10ce" (UID: "729c9040-8d83-4f84-a894-b4696b3e10ce"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:35:43.208864 kubelet[2887]: I0124 00:35:43.208827 2887 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-ca-bundle\") on node \"ci-4593-0-0-7-bbab233dcd\" DevicePath \"\"" Jan 24 00:35:43.208864 kubelet[2887]: I0124 00:35:43.208858 2887 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c94kw\" (UniqueName: \"kubernetes.io/projected/729c9040-8d83-4f84-a894-b4696b3e10ce-kube-api-access-c94kw\") on node \"ci-4593-0-0-7-bbab233dcd\" DevicePath \"\"" Jan 24 00:35:43.208864 kubelet[2887]: I0124 00:35:43.208868 2887 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/729c9040-8d83-4f84-a894-b4696b3e10ce-whisker-backend-key-pair\") on node \"ci-4593-0-0-7-bbab233dcd\" DevicePath \"\"" Jan 24 00:35:43.601882 systemd[1]: var-lib-kubelet-pods-729c9040\x2d8d83\x2d4f84\x2da894\x2db4696b3e10ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc94kw.mount: Deactivated successfully. Jan 24 00:35:43.602548 systemd[1]: var-lib-kubelet-pods-729c9040\x2d8d83\x2d4f84\x2da894\x2db4696b3e10ce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:35:43.663024 systemd[1]: Removed slice kubepods-besteffort-pod729c9040_8d83_4f84_a894_b4696b3e10ce.slice - libcontainer container kubepods-besteffort-pod729c9040_8d83_4f84_a894_b4696b3e10ce.slice. Jan 24 00:35:43.834908 kubelet[2887]: I0124 00:35:43.834313 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9gf56" podStartSLOduration=2.462547781 podStartE2EDuration="18.834295847s" podCreationTimestamp="2026-01-24 00:35:25 +0000 UTC" firstStartedPulling="2026-01-24 00:35:26.269332877 +0000 UTC m=+20.690264183" lastFinishedPulling="2026-01-24 00:35:42.641080944 +0000 UTC m=+37.062012249" observedRunningTime="2026-01-24 00:35:43.833421484 +0000 UTC m=+38.254352815" watchObservedRunningTime="2026-01-24 00:35:43.834295847 +0000 UTC m=+38.255227158" Jan 24 00:35:43.903442 systemd[1]: Created slice kubepods-besteffort-pod3de22d99_6b03_4053_8689_b6779dcd23d2.slice - libcontainer container kubepods-besteffort-pod3de22d99_6b03_4053_8689_b6779dcd23d2.slice. 
Jan 24 00:35:44.017020 kubelet[2887]: I0124 00:35:44.016679 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3de22d99-6b03-4053-8689-b6779dcd23d2-whisker-backend-key-pair\") pod \"whisker-7c97449468-s944b\" (UID: \"3de22d99-6b03-4053-8689-b6779dcd23d2\") " pod="calico-system/whisker-7c97449468-s944b" Jan 24 00:35:44.017020 kubelet[2887]: I0124 00:35:44.016739 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3de22d99-6b03-4053-8689-b6779dcd23d2-whisker-ca-bundle\") pod \"whisker-7c97449468-s944b\" (UID: \"3de22d99-6b03-4053-8689-b6779dcd23d2\") " pod="calico-system/whisker-7c97449468-s944b" Jan 24 00:35:44.017020 kubelet[2887]: I0124 00:35:44.016769 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mwn\" (UniqueName: \"kubernetes.io/projected/3de22d99-6b03-4053-8689-b6779dcd23d2-kube-api-access-w4mwn\") pod \"whisker-7c97449468-s944b\" (UID: \"3de22d99-6b03-4053-8689-b6779dcd23d2\") " pod="calico-system/whisker-7c97449468-s944b" Jan 24 00:35:44.210193 containerd[1677]: time="2026-01-24T00:35:44.210057497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c97449468-s944b,Uid:3de22d99-6b03-4053-8689-b6779dcd23d2,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:44.479106 systemd-networkd[1573]: cali32dd80f0624: Link UP Jan 24 00:35:44.479799 systemd-networkd[1573]: cali32dd80f0624: Gained carrier Jan 24 00:35:44.505437 containerd[1677]: 2026-01-24 00:35:44.278 [INFO][3952] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:35:44.505437 containerd[1677]: 2026-01-24 00:35:44.361 [INFO][3952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0 whisker-7c97449468- calico-system 3de22d99-6b03-4053-8689-b6779dcd23d2 867 0 2026-01-24 00:35:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c97449468 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd whisker-7c97449468-s944b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali32dd80f0624 [] [] }} ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-" Jan 24 00:35:44.505437 containerd[1677]: 2026-01-24 00:35:44.361 [INFO][3952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" Jan 24 00:35:44.505437 containerd[1677]: 2026-01-24 00:35:44.415 [INFO][4050] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" HandleID="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Workload="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.415 [INFO][4050] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" HandleID="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Workload="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-7-bbab233dcd", "pod":"whisker-7c97449468-s944b", "timestamp":"2026-01-24 00:35:44.415014957 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.415 [INFO][4050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.415 [INFO][4050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.415 [INFO][4050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.423 [INFO][4050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.429 [INFO][4050] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.435 [INFO][4050] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.437 [INFO][4050] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505664 containerd[1677]: 2026-01-24 00:35:44.440 [INFO][4050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505851 containerd[1677]: 2026-01-24 00:35:44.441 [INFO][4050] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505851 containerd[1677]: 2026-01-24 00:35:44.443 [INFO][4050] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b Jan 24 00:35:44.505851 containerd[1677]: 2026-01-24 00:35:44.449 [INFO][4050] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505851 containerd[1677]: 2026-01-24 00:35:44.458 [INFO][4050] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.129/26] block=192.168.23.128/26 handle="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505851 containerd[1677]: 2026-01-24 00:35:44.458 [INFO][4050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.129/26] handle="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:44.505851 
containerd[1677]: 2026-01-24 00:35:44.459 [INFO][4050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:35:44.505851 containerd[1677]: 2026-01-24 00:35:44.459 [INFO][4050] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.129/26] IPv6=[] ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" HandleID="k8s-pod-network.615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Workload="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" Jan 24 00:35:44.505994 containerd[1677]: 2026-01-24 00:35:44.462 [INFO][3952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0", GenerateName:"whisker-7c97449468-", Namespace:"calico-system", SelfLink:"", UID:"3de22d99-6b03-4053-8689-b6779dcd23d2", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c97449468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"whisker-7c97449468-s944b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32dd80f0624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:44.505994 containerd[1677]: 2026-01-24 00:35:44.463 [INFO][3952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.129/32] ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" Jan 24 00:35:44.506070 containerd[1677]: 2026-01-24 00:35:44.463 [INFO][3952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32dd80f0624 ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" Jan 24 00:35:44.506070 containerd[1677]: 2026-01-24 00:35:44.482 [INFO][3952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" Jan 24 00:35:44.506110 containerd[1677]: 2026-01-24 00:35:44.483 [INFO][3952] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0", GenerateName:"whisker-7c97449468-", Namespace:"calico-system", SelfLink:"", UID:"3de22d99-6b03-4053-8689-b6779dcd23d2", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c97449468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b", Pod:"whisker-7c97449468-s944b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32dd80f0624", MAC:"1e:e3:76:fd:6d:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:44.506160 containerd[1677]: 2026-01-24 00:35:44.500 [INFO][3952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" Namespace="calico-system" Pod="whisker-7c97449468-s944b" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-whisker--7c97449468--s944b-eth0" Jan 24 00:35:44.574963 containerd[1677]: time="2026-01-24T00:35:44.574915911Z" level=info msg="connecting to shim 615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b" address="unix:///run/containerd/s/40f48ece6bd36c8631b2e4a0fbc68b51b6028552ff0220ca06dc66fcd9917dbd" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:44.602517 systemd[1]: Started cri-containerd-615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b.scope - libcontainer container 615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b. 
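The IPAM lines above show the node holding an affinity for block 192.168.23.128/26 and the whisker pod receiving 192.168.23.129. A small sketch of the block arithmetic, under the assumption (for illustration only) that the allocator hands out the first free host address in the affine block; only the block and the resulting address appear in the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.23.128/26")

	first := block.Addr()     // network address of the block, 192.168.23.128
	candidate := first.Next() // first assignable host address, 192.168.23.129
	fmt.Println("block:", block)
	fmt.Println("assigned:", candidate, "inside block:", block.Contains(candidate))

	// Walk the block to show its extent: .128 through .191 (64 addresses).
	last := first
	for a := first; block.Contains(a); a = a.Next() {
		last = a
	}
	fmt.Println("range:", first, "-", last)
}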
Jan 24 00:35:44.628000 audit: BPF prog-id=173 op=LOAD Jan 24 00:35:44.628000 audit: BPF prog-id=174 op=LOAD Jan 24 00:35:44.628000 audit[4105]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356230663830653336313930326538353835333164383865366433 Jan 24 00:35:44.628000 audit: BPF prog-id=174 op=UNLOAD Jan 24 00:35:44.628000 audit[4105]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356230663830653336313930326538353835333164383865366433 Jan 24 00:35:44.629000 audit: BPF prog-id=175 op=LOAD Jan 24 00:35:44.629000 audit[4105]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356230663830653336313930326538353835333164383865366433 Jan 24 00:35:44.629000 audit: BPF prog-id=176 op=LOAD Jan 24 00:35:44.629000 audit[4105]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356230663830653336313930326538353835333164383865366433 Jan 24 00:35:44.629000 audit: BPF prog-id=176 op=UNLOAD Jan 24 00:35:44.629000 audit[4105]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356230663830653336313930326538353835333164383865366433 Jan 24 00:35:44.629000 audit: BPF prog-id=175 op=UNLOAD Jan 24 00:35:44.629000 audit[4105]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356230663830653336313930326538353835333164383865366433 Jan 24 00:35:44.629000 audit: BPF prog-id=177 op=LOAD Jan 24 00:35:44.629000 audit[4105]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4093 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356230663830653336313930326538353835333164383865366433 Jan 24 00:35:44.672472 containerd[1677]: time="2026-01-24T00:35:44.672419146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c97449468-s944b,Uid:3de22d99-6b03-4053-8689-b6779dcd23d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"615b0f80e361902e858531d88e6d3a5af4c008bbe120c9d0ff8e3fcc3285933b\"" Jan 24 00:35:44.674513 containerd[1677]: time="2026-01-24T00:35:44.674222365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:35:44.681000 audit: BPF prog-id=178 op=LOAD Jan 24 00:35:44.681000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5e092e00 a2=98 a3=1fffffffffffffff items=0 ppid=3975 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.681000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:35:44.681000 audit: BPF prog-id=178 op=UNLOAD Jan 24 00:35:44.681000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd5e092dd0 a3=0 items=0 ppid=3975 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.681000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:35:44.681000 audit: BPF prog-id=179 op=LOAD Jan 24 00:35:44.681000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5e092ce0 a2=94 a3=3 items=0 ppid=3975 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.681000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:35:44.682000 audit: BPF prog-id=179 op=UNLOAD Jan 24 00:35:44.682000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd5e092ce0 a2=94 a3=3 items=0 ppid=3975 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.682000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:35:44.682000 audit: BPF prog-id=180 op=LOAD Jan 24 00:35:44.682000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5e092d20 a2=94 a3=7ffd5e092f00 items=0 ppid=3975 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.682000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:35:44.682000 audit: BPF prog-id=180 op=UNLOAD Jan 24 00:35:44.682000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd5e092d20 a2=94 a3=7ffd5e092f00 items=0 ppid=3975 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.682000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:35:44.683000 audit: BPF prog-id=181 op=LOAD Jan 24 00:35:44.683000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefa23d0b0 a2=98 a3=3 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.683000 audit: BPF prog-id=181 op=UNLOAD Jan 24 00:35:44.683000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffefa23d080 a3=0 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.683000 audit: BPF prog-id=182 op=LOAD Jan 24 00:35:44.683000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefa23cea0 a2=94 a3=54428f items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.683000 audit: BPF prog-id=182 op=UNLOAD Jan 24 00:35:44.683000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefa23cea0 a2=94 a3=54428f items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.683000 audit: BPF prog-id=183 op=LOAD Jan 24 00:35:44.683000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefa23ced0 a2=94 a3=2 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.683000 audit: BPF prog-id=183 op=UNLOAD Jan 24 00:35:44.683000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefa23ced0 a2=0 a3=2 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.852000 audit: BPF prog-id=184 op=LOAD Jan 24 00:35:44.852000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefa23cd90 a2=94 a3=1 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.852000 audit: BPF prog-id=184 op=UNLOAD Jan 24 00:35:44.852000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefa23cd90 a2=94 a3=1 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.863000 audit: BPF prog-id=185 op=LOAD Jan 24 00:35:44.863000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefa23cd80 a2=94 a3=4 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.863000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.863000 audit: BPF prog-id=185 op=UNLOAD Jan 24 00:35:44.863000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffefa23cd80 a2=0 a3=4 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.863000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.864000 audit: BPF prog-id=186 op=LOAD Jan 24 00:35:44.864000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefa23cbe0 a2=94 a3=5 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.864000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.864000 audit: BPF prog-id=186 op=UNLOAD Jan 24 00:35:44.864000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefa23cbe0 a2=0 a3=5 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.864000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.864000 audit: BPF prog-id=187 op=LOAD Jan 24 00:35:44.864000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefa23ce00 a2=94 a3=6 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.864000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.864000 audit: BPF prog-id=187 op=UNLOAD Jan 24 00:35:44.864000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffefa23ce00 a2=0 a3=6 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.864000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.865000 audit: BPF prog-id=188 op=LOAD Jan 24 00:35:44.865000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefa23c5b0 a2=94 a3=88 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.865000 audit: BPF prog-id=189 op=LOAD Jan 24 00:35:44.865000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffefa23c430 a2=94 a3=2 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.865000 audit: BPF prog-id=189 op=UNLOAD Jan 24 00:35:44.865000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffefa23c460 a2=0 a3=7ffefa23c560 items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.865000 audit: BPF prog-id=188 op=UNLOAD Jan 24 00:35:44.865000 audit[4143]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=40fcd10 a2=0 a3=e13a9f28433ee52f items=0 ppid=3975 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:35:44.875000 audit: BPF prog-id=190 op=LOAD Jan 24 00:35:44.875000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeba776030 a2=98 a3=1999999999999999 items=0 ppid=3975 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:35:44.875000 audit: BPF prog-id=190 op=UNLOAD Jan 24 00:35:44.875000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeba776000 a3=0 items=0 ppid=3975 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:35:44.875000 audit: BPF prog-id=191 op=LOAD Jan 24 00:35:44.875000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeba775f10 a2=94 a3=ffff items=0 ppid=3975 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:35:44.875000 audit: BPF prog-id=191 op=UNLOAD Jan 24 00:35:44.875000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeba775f10 a2=94 a3=ffff items=0 ppid=3975 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:35:44.875000 audit: BPF prog-id=192 op=LOAD Jan 24 00:35:44.875000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeba775f50 a2=94 a3=7ffeba776130 items=0 ppid=3975 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:35:44.875000 audit: BPF prog-id=192 op=UNLOAD Jan 24 00:35:44.875000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeba775f50 a2=94 a3=7ffeba776130 items=0 ppid=3975 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:35:44.942371 systemd-networkd[1573]: vxlan.calico: Link UP Jan 24 00:35:44.942379 systemd-networkd[1573]: vxlan.calico: Gained carrier Jan 24 00:35:44.971000 audit: BPF prog-id=193 op=LOAD Jan 24 00:35:44.971000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff3816350 a2=98 a3=0 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.971000 audit: BPF prog-id=193 op=UNLOAD Jan 24 00:35:44.971000 audit[4195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff3816320 a3=0 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.971000 audit: BPF prog-id=194 op=LOAD Jan 24 00:35:44.971000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff3816160 a2=94 a3=54428f items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.972000 audit: BPF prog-id=194 op=UNLOAD Jan 24 00:35:44.972000 audit[4195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff3816160 a2=94 a3=54428f items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.972000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.972000 audit: BPF prog-id=195 op=LOAD Jan 24 00:35:44.972000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff3816190 a2=94 a3=2 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.972000 audit: BPF prog-id=195 op=UNLOAD Jan 24 00:35:44.972000 audit[4195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff3816190 a2=0 a3=2 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.972000 audit: BPF prog-id=196 op=LOAD Jan 24 00:35:44.972000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff3815f40 a2=94 a3=4 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.972000 audit: BPF prog-id=196 op=UNLOAD Jan 24 00:35:44.972000 audit[4195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff3815f40 a2=94 a3=4 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.972000 audit: BPF prog-id=197 op=LOAD Jan 24 00:35:44.972000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff3816040 a2=94 a3=7ffff38161c0 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.972000 audit: BPF prog-id=197 op=UNLOAD Jan 24 00:35:44.972000 audit[4195]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff3816040 a2=0 a3=7ffff38161c0 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.973000 audit: BPF prog-id=198 op=LOAD Jan 24 00:35:44.973000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff3815770 a2=94 a3=2 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.973000 audit: BPF prog-id=198 op=UNLOAD Jan 24 00:35:44.973000 audit[4195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff3815770 a2=0 a3=2 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.973000 audit: BPF prog-id=199 op=LOAD Jan 24 00:35:44.973000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff3815870 a2=94 a3=30 items=0 ppid=3975 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:35:44.984000 audit: BPF prog-id=200 op=LOAD Jan 24 00:35:44.984000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5751e5d0 a2=98 a3=0 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.984000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:44.984000 audit: BPF prog-id=200 op=UNLOAD Jan 24 00:35:44.984000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff5751e5a0 a3=0 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.984000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:44.984000 audit: BPF prog-id=201 op=LOAD Jan 24 00:35:44.984000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5751e3c0 a2=94 a3=54428f items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.984000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:44.984000 audit: BPF prog-id=201 op=UNLOAD Jan 24 00:35:44.984000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5751e3c0 a2=94 a3=54428f items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.984000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:44.984000 audit: BPF prog-id=202 op=LOAD Jan 24 00:35:44.984000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5751e3f0 a2=94 a3=2 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.984000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:44.985000 audit: BPF prog-id=202 op=UNLOAD Jan 24 00:35:44.985000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5751e3f0 a2=0 a3=2 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:44.985000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.005168 containerd[1677]: time="2026-01-24T00:35:45.005009980Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:45.006643 containerd[1677]: time="2026-01-24T00:35:45.006558583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:35:45.006854 kubelet[2887]: E0124 00:35:45.006809 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:35:45.007137 containerd[1677]: 
time="2026-01-24T00:35:45.006816032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:45.007457 kubelet[2887]: E0124 00:35:45.007198 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:35:45.012511 kubelet[2887]: E0124 00:35:45.012452 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5f190ef84d634309a1c72b69d1983ada,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:45.015204 containerd[1677]: time="2026-01-24T00:35:45.014790314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:35:45.142000 audit: BPF prog-id=203 op=LOAD Jan 24 00:35:45.142000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5751e2b0 a2=94 a3=1 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.142000 audit: BPF prog-id=203 op=UNLOAD Jan 24 00:35:45.142000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5751e2b0 a2=94 a3=1 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 24 00:35:45.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.152000 audit: BPF prog-id=204 op=LOAD Jan 24 00:35:45.152000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5751e2a0 a2=94 a3=4 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.152000 audit: BPF prog-id=204 op=UNLOAD Jan 24 00:35:45.152000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff5751e2a0 a2=0 a3=4 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.152000 audit: BPF prog-id=205 op=LOAD Jan 24 00:35:45.152000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff5751e100 a2=94 a3=5 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.152000 audit: BPF prog-id=205 op=UNLOAD Jan 24 00:35:45.152000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff5751e100 a2=0 a3=5 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.153000 audit: BPF prog-id=206 op=LOAD Jan 24 00:35:45.153000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5751e320 a2=94 a3=6 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.153000 audit: BPF prog-id=206 op=UNLOAD Jan 24 00:35:45.153000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff5751e320 a2=0 a3=6 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.153000 audit: BPF prog-id=207 op=LOAD Jan 24 00:35:45.153000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5751dad0 a2=94 a3=88 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.153000 audit: BPF prog-id=208 op=LOAD Jan 24 00:35:45.153000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff5751d950 a2=94 a3=2 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.153000 audit: BPF prog-id=208 op=UNLOAD Jan 24 00:35:45.153000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff5751d980 a2=0 a3=7fff5751da80 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.153000 audit: BPF prog-id=207 op=UNLOAD Jan 24 00:35:45.153000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=8473d10 a2=0 a3=f094816a81b0b44 items=0 ppid=3975 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:35:45.157000 audit: BPF prog-id=199 op=UNLOAD Jan 24 00:35:45.157000 audit[3975]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ca0280 a2=0 a3=0 items=0 ppid=3966 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.157000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 00:35:45.212000 audit[4222]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4222 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:45.212000 audit[4222]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=5084 a0=3 a1=7ffc582a9260 a2=0 a3=7ffc582a924c items=0 ppid=3975 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.212000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:45.217000 audit[4223]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:45.217000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff90e263e0 a2=0 a3=7fff90e263cc items=0 ppid=3975 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.217000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:45.225000 audit[4221]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4221 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:45.225000 audit[4221]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc66f2a0a0 a2=0 a3=7ffc66f2a08c items=0 ppid=3975 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.225000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:45.226000 audit[4224]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:45.226000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7fffbe1ad190 a2=0 a3=7fffbe1ad17c items=0 ppid=3975 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.226000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:45.344626 containerd[1677]: time="2026-01-24T00:35:45.344570485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:45.346236 containerd[1677]: time="2026-01-24T00:35:45.346188312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:35:45.346403 containerd[1677]: time="2026-01-24T00:35:45.346222974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:45.346469 kubelet[2887]: E0124 00:35:45.346422 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:35:45.346517 kubelet[2887]: E0124 00:35:45.346482 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:35:45.346835 kubelet[2887]: E0124 00:35:45.346601 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:45.348041 kubelet[2887]: E0124 00:35:45.348017 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:35:45.664015 kubelet[2887]: I0124 00:35:45.663627 2887 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729c9040-8d83-4f84-a894-b4696b3e10ce" path="/var/lib/kubelet/pods/729c9040-8d83-4f84-a894-b4696b3e10ce/volumes" Jan 24 00:35:45.824224 kubelet[2887]: E0124 00:35:45.824180 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:35:45.883000 audit[4260]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:45.883000 audit[4260]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4d676460 a2=0 a3=7ffc4d67644c items=0 ppid=3004 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:45.889000 audit[4260]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:45.889000 audit[4260]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc4d676460 a2=0 a3=0 items=0 ppid=3004 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:45.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:46.157477 systemd-networkd[1573]: cali32dd80f0624: Gained IPv6LL Jan 24 00:35:46.541506 systemd-networkd[1573]: vxlan.calico: Gained IPv6LL Jan 24 00:35:46.656672 containerd[1677]: time="2026-01-24T00:35:46.656610299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmlzg,Uid:cc6833e1-bfb0-4eb5-9ff2-60bda2e93290,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:46.795622 systemd-networkd[1573]: calib0347955392: Link UP Jan 24 00:35:46.795746 systemd-networkd[1573]: calib0347955392: Gained carrier Jan 24 00:35:46.811877 containerd[1677]: 2026-01-24 00:35:46.712 [INFO][4265] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0 goldmane-666569f655- calico-system cc6833e1-bfb0-4eb5-9ff2-60bda2e93290 794 0 2026-01-24 00:35:24 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd goldmane-666569f655-dmlzg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib0347955392 [] [] }} ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-" Jan 24 00:35:46.811877 containerd[1677]: 2026-01-24 00:35:46.712 [INFO][4265] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" Jan 24 00:35:46.811877 containerd[1677]: 2026-01-24 00:35:46.753 [INFO][4277] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" HandleID="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Workload="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.754 [INFO][4277] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" HandleID="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Workload="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-7-bbab233dcd", "pod":"goldmane-666569f655-dmlzg", "timestamp":"2026-01-24 00:35:46.753904317 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.754 [INFO][4277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.754 [INFO][4277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.754 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.761 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.767 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.772 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.774 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812062 containerd[1677]: 2026-01-24 00:35:46.777 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812261 containerd[1677]: 2026-01-24 00:35:46.777 [INFO][4277] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812261 containerd[1677]: 2026-01-24 00:35:46.779 [INFO][4277] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29 Jan 24 00:35:46.812261 containerd[1677]: 2026-01-24 00:35:46.784 [INFO][4277] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812261 containerd[1677]: 2026-01-24 00:35:46.790 [INFO][4277] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.130/26] block=192.168.23.128/26 handle="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812261 containerd[1677]: 2026-01-24 00:35:46.790 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.130/26] handle="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:46.812261 containerd[1677]: 2026-01-24 00:35:46.790 [INFO][4277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
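
Editor's note: the IPAM sequence above claims 192.168.23.130 from the node's affine block 192.168.23.128/26; Calico's default /26 block size gives each node 64 addresses per block. A quick check of that arithmetic with Python's standard ipaddress module (illustrative only, not part of the CNI plugin):

    import ipaddress

    block = ipaddress.ip_network("192.168.23.128/26")

    print(block.num_addresses)                               # 64 addresses in a /26 block
    print(ipaddress.ip_address("192.168.23.130") in block)   # True: the IP claimed for goldmane above
    print(ipaddress.ip_address("192.168.23.131") in block)   # True: the IP claimed for csi-node-driver further below
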
Jan 24 00:35:46.812261 containerd[1677]: 2026-01-24 00:35:46.790 [INFO][4277] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.130/26] IPv6=[] ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" HandleID="k8s-pod-network.10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Workload="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" Jan 24 00:35:46.812386 containerd[1677]: 2026-01-24 00:35:46.793 [INFO][4265] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cc6833e1-bfb0-4eb5-9ff2-60bda2e93290", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"goldmane-666569f655-dmlzg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0347955392", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:46.812441 containerd[1677]: 2026-01-24 00:35:46.793 [INFO][4265] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.130/32] ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" Jan 24 00:35:46.812441 containerd[1677]: 2026-01-24 00:35:46.793 [INFO][4265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0347955392 ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" Jan 24 00:35:46.812441 containerd[1677]: 2026-01-24 00:35:46.795 [INFO][4265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" Jan 24 00:35:46.812496 containerd[1677]: 2026-01-24 00:35:46.795 [INFO][4265] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" 
Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cc6833e1-bfb0-4eb5-9ff2-60bda2e93290", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29", Pod:"goldmane-666569f655-dmlzg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0347955392", MAC:"0e:53:b6:a1:41:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:46.812872 containerd[1677]: 2026-01-24 00:35:46.806 [INFO][4265] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" Namespace="calico-system" Pod="goldmane-666569f655-dmlzg" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-goldmane--666569f655--dmlzg-eth0" Jan 24 00:35:46.824000 audit[4291]: NETFILTER_CFG table=filter:125 family=2 entries=44 op=nft_register_chain pid=4291 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:46.824000 audit[4291]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffcb685a780 a2=0 a3=7ffcb685a76c items=0 ppid=3975 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.824000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:46.838405 containerd[1677]: time="2026-01-24T00:35:46.838343607Z" level=info msg="connecting to shim 10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29" address="unix:///run/containerd/s/3a31ebd95045994ce2e6c2f16782d719fcd2636ec227abc2ab4776f36cb8b458" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:46.862394 systemd[1]: Started cri-containerd-10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29.scope - libcontainer container 10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29. 
Jan 24 00:35:46.871000 audit: BPF prog-id=209 op=LOAD Jan 24 00:35:46.872000 audit: BPF prog-id=210 op=LOAD Jan 24 00:35:46.872000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4300 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130383730623732643064353634663037396331663834386339306666 Jan 24 00:35:46.872000 audit: BPF prog-id=210 op=UNLOAD Jan 24 00:35:46.872000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130383730623732643064353634663037396331663834386339306666 Jan 24 00:35:46.872000 audit: BPF prog-id=211 op=LOAD Jan 24 00:35:46.872000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4300 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130383730623732643064353634663037396331663834386339306666 Jan 24 00:35:46.872000 audit: BPF prog-id=212 op=LOAD Jan 24 00:35:46.872000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4300 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130383730623732643064353634663037396331663834386339306666 Jan 24 00:35:46.872000 audit: BPF prog-id=212 op=UNLOAD Jan 24 00:35:46.872000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130383730623732643064353634663037396331663834386339306666 Jan 24 00:35:46.872000 audit: BPF prog-id=211 op=UNLOAD Jan 24 00:35:46.872000 audit[4311]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4300 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130383730623732643064353634663037396331663834386339306666 Jan 24 00:35:46.873000 audit: BPF prog-id=213 op=LOAD Jan 24 00:35:46.873000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4300 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:46.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130383730623732643064353634663037396331663834386339306666 Jan 24 00:35:46.908947 containerd[1677]: time="2026-01-24T00:35:46.908879676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dmlzg,Uid:cc6833e1-bfb0-4eb5-9ff2-60bda2e93290,Namespace:calico-system,Attempt:0,} returns sandbox id \"10870b72d0d564f079c1f848c90ffa1bcce756c2747c763fa8be27bfcf2abb29\"" Jan 24 00:35:46.910353 containerd[1677]: time="2026-01-24T00:35:46.910318129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:35:47.249748 containerd[1677]: time="2026-01-24T00:35:47.249674552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:47.250810 containerd[1677]: time="2026-01-24T00:35:47.250770728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:35:47.250938 containerd[1677]: time="2026-01-24T00:35:47.250844930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:47.251032 kubelet[2887]: E0124 00:35:47.250973 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:35:47.252517 kubelet[2887]: E0124 00:35:47.251043 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:35:47.252517 kubelet[2887]: E0124 00:35:47.251174 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghhgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmlzg_calico-system(cc6833e1-bfb0-4eb5-9ff2-60bda2e93290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:47.252681 kubelet[2887]: E0124 00:35:47.252597 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:35:47.656180 containerd[1677]: time="2026-01-24T00:35:47.656006547Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-x8bpc,Uid:fe469055-bc9a-468d-9724-6bf26a67fb3d,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:47.656563 containerd[1677]: time="2026-01-24T00:35:47.656259561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqxrd,Uid:921dd561-510c-4c7e-86db-9c24acbd795a,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:47.656563 containerd[1677]: time="2026-01-24T00:35:47.656373485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f49pv,Uid:09588ba1-4540-4c39-997a-6cc3fbc4c31f,Namespace:kube-system,Attempt:0,}" Jan 24 00:35:47.821402 systemd-networkd[1573]: calib0347955392: Gained IPv6LL Jan 24 00:35:47.829639 kubelet[2887]: E0124 00:35:47.829583 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:35:47.847292 systemd-networkd[1573]: calif7d8372b696: Link UP Jan 24 00:35:47.847441 systemd-networkd[1573]: calif7d8372b696: Gained carrier Jan 24 00:35:47.865694 containerd[1677]: 2026-01-24 00:35:47.725 [INFO][4339] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0 csi-node-driver- calico-system fe469055-bc9a-468d-9724-6bf26a67fb3d 685 0 2026-01-24 00:35:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd csi-node-driver-x8bpc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif7d8372b696 [] [] }} ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-" Jan 24 00:35:47.865694 containerd[1677]: 2026-01-24 00:35:47.726 [INFO][4339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" Jan 24 00:35:47.865694 containerd[1677]: 2026-01-24 00:35:47.769 [INFO][4378] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" HandleID="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Workload="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.769 [INFO][4378] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" HandleID="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Workload="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-7-bbab233dcd", "pod":"csi-node-driver-x8bpc", "timestamp":"2026-01-24 00:35:47.769740988 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.769 [INFO][4378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.770 [INFO][4378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.770 [INFO][4378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.779 [INFO][4378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.787 [INFO][4378] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.800 [INFO][4378] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.803 [INFO][4378] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866109 containerd[1677]: 2026-01-24 00:35:47.805 [INFO][4378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866469 containerd[1677]: 2026-01-24 00:35:47.805 [INFO][4378] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866469 containerd[1677]: 2026-01-24 00:35:47.807 [INFO][4378] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37 Jan 24 00:35:47.866469 containerd[1677]: 2026-01-24 00:35:47.812 [INFO][4378] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866469 containerd[1677]: 2026-01-24 00:35:47.822 [INFO][4378] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.131/26] block=192.168.23.128/26 handle="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866469 containerd[1677]: 2026-01-24 00:35:47.822 [INFO][4378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.131/26] handle="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.866469 containerd[1677]: 2026-01-24 00:35:47.822 [INFO][4378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
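
Editor's note: the repeated "fetch failed after status: 404 Not Found" / ErrImagePull entries for the whisker, whisker-backend, and goldmane images earlier in this log mean the registry has no manifest for the requested tag. A rough sketch of the equivalent check against the OCI distribution API follows; the ghcr.io anonymous token endpoint and pull behaviour are assumptions here, not something this log shows.

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo, tag):
        # Assumption: ghcr.io issues anonymous pull tokens for public images
        # via its standard registry token endpoint.
        token_url = "https://ghcr.io/token?scope=repository:{}:pull".format(repo)
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            "https://ghcr.io/v2/{}/manifests/{}".format(repo, tag),
            method="HEAD",
            headers={
                "Authorization": "Bearer " + token,
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False   # matches the NotFound errors reported by containerd above
            raise

    print(tag_exists("flatcar/calico/whisker", "v3.30.4"))
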
Jan 24 00:35:47.866469 containerd[1677]: 2026-01-24 00:35:47.823 [INFO][4378] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.131/26] IPv6=[] ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" HandleID="k8s-pod-network.58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Workload="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" Jan 24 00:35:47.866720 containerd[1677]: 2026-01-24 00:35:47.840 [INFO][4339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe469055-bc9a-468d-9724-6bf26a67fb3d", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"csi-node-driver-x8bpc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7d8372b696", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:47.866868 containerd[1677]: 2026-01-24 00:35:47.841 [INFO][4339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.131/32] ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" Jan 24 00:35:47.866868 containerd[1677]: 2026-01-24 00:35:47.841 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7d8372b696 ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" Jan 24 00:35:47.866868 containerd[1677]: 2026-01-24 00:35:47.845 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" Jan 24 00:35:47.867157 containerd[1677]: 2026-01-24 00:35:47.853 [INFO][4339] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe469055-bc9a-468d-9724-6bf26a67fb3d", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37", Pod:"csi-node-driver-x8bpc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7d8372b696", MAC:"ba:5b:30:c7:7a:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:47.868045 containerd[1677]: 2026-01-24 00:35:47.862 [INFO][4339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" Namespace="calico-system" Pod="csi-node-driver-x8bpc" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-csi--node--driver--x8bpc-eth0" Jan 24 00:35:47.868000 audit[4401]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:47.872650 kernel: kauditd_printk_skb: 256 callbacks suppressed Jan 24 00:35:47.872713 kernel: audit: type=1325 audit(1769214947.868:658): table=filter:126 family=2 entries=20 op=nft_register_rule pid=4401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:47.868000 audit[4401]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe5ca78f20 a2=0 a3=7ffe5ca78f0c items=0 ppid=3004 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.875632 kernel: audit: type=1300 audit(1769214947.868:658): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe5ca78f20 a2=0 a3=7ffe5ca78f0c items=0 ppid=3004 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:47.880581 
kernel: audit: type=1327 audit(1769214947.868:658): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:47.875000 audit[4401]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:47.875000 audit[4401]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe5ca78f20 a2=0 a3=0 items=0 ppid=3004 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.884860 kernel: audit: type=1325 audit(1769214947.875:659): table=nat:127 family=2 entries=14 op=nft_register_rule pid=4401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:47.884909 kernel: audit: type=1300 audit(1769214947.875:659): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe5ca78f20 a2=0 a3=0 items=0 ppid=3004 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:47.892230 kernel: audit: type=1327 audit(1769214947.875:659): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:47.900000 audit[4407]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:47.905301 kernel: audit: type=1325 audit(1769214947.900:660): table=filter:128 family=2 entries=40 op=nft_register_chain pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:47.900000 audit[4407]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffda624a4a0 a2=0 a3=7ffda624a48c items=0 ppid=3975 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.910935 kernel: audit: type=1300 audit(1769214947.900:660): arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffda624a4a0 a2=0 a3=7ffda624a48c items=0 ppid=3975 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.900000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:47.915231 kernel: audit: type=1327 audit(1769214947.900:660): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:47.919764 containerd[1677]: time="2026-01-24T00:35:47.919731319Z" level=info msg="connecting to shim 58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37" address="unix:///run/containerd/s/ea40bee35ebaf9df043d64431d89cde98eb7257686fd99891dbabae01ab00433" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:47.939632 systemd-networkd[1573]: cali59661adc6bc: 
Link UP Jan 24 00:35:47.939800 systemd-networkd[1573]: cali59661adc6bc: Gained carrier Jan 24 00:35:47.954550 containerd[1677]: 2026-01-24 00:35:47.746 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0 coredns-668d6bf9bc- kube-system 09588ba1-4540-4c39-997a-6cc3fbc4c31f 797 0 2026-01-24 00:35:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd coredns-668d6bf9bc-f49pv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali59661adc6bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-" Jan 24 00:35:47.954550 containerd[1677]: 2026-01-24 00:35:47.746 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" Jan 24 00:35:47.954550 containerd[1677]: 2026-01-24 00:35:47.801 [INFO][4384] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" HandleID="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Workload="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.802 [INFO][4384] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" HandleID="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Workload="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4593-0-0-7-bbab233dcd", "pod":"coredns-668d6bf9bc-f49pv", "timestamp":"2026-01-24 00:35:47.80198271 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.802 [INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.823 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.829 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.881 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.889 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.899 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.902 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.954830 containerd[1677]: 2026-01-24 00:35:47.906 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.955016 containerd[1677]: 2026-01-24 00:35:47.906 [INFO][4384] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.955016 containerd[1677]: 2026-01-24 00:35:47.908 [INFO][4384] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc Jan 24 00:35:47.955016 containerd[1677]: 2026-01-24 00:35:47.915 [INFO][4384] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.955016 containerd[1677]: 2026-01-24 00:35:47.922 [INFO][4384] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.132/26] block=192.168.23.128/26 handle="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.955016 containerd[1677]: 2026-01-24 00:35:47.922 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.132/26] handle="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:47.955016 containerd[1677]: 2026-01-24 00:35:47.923 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
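The IPAM traces above claim 192.168.23.131/26 for csi-node-driver-x8bpc and 192.168.23.132/26 for coredns-668d6bf9bc-f49pv out of the host-affine block 192.168.23.128/26, which spans .128 through .191 (64 addresses). A tiny standard-library check of that block arithmetic, purely for illustration:

```go
// Sketch: confirm that the addresses Calico IPAM claims above really fall
// inside the host's affine block 192.168.23.128/26.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.23.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	for _, ip := range []string{"192.168.23.131", "192.168.23.132", "192.168.23.133"} {
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
	}
}
```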
Jan 24 00:35:47.955016 containerd[1677]: 2026-01-24 00:35:47.923 [INFO][4384] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.132/26] IPv6=[] ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" HandleID="k8s-pod-network.32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Workload="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" Jan 24 00:35:47.955589 containerd[1677]: 2026-01-24 00:35:47.925 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09588ba1-4540-4c39-997a-6cc3fbc4c31f", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"coredns-668d6bf9bc-f49pv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59661adc6bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:47.955589 containerd[1677]: 2026-01-24 00:35:47.925 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.132/32] ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" Jan 24 00:35:47.955589 containerd[1677]: 2026-01-24 00:35:47.925 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59661adc6bc ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" Jan 24 00:35:47.955589 containerd[1677]: 2026-01-24 00:35:47.934 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" Jan 24 00:35:47.955589 containerd[1677]: 2026-01-24 00:35:47.934 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"09588ba1-4540-4c39-997a-6cc3fbc4c31f", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc", Pod:"coredns-668d6bf9bc-f49pv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59661adc6bc", MAC:"22:8e:95:eb:66:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:47.955589 containerd[1677]: 2026-01-24 00:35:47.950 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-f49pv" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--f49pv-eth0" Jan 24 00:35:47.966535 systemd[1]: Started cri-containerd-58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37.scope - libcontainer container 58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37. 
Jan 24 00:35:47.971000 audit[4453]: NETFILTER_CFG table=filter:129 family=2 entries=50 op=nft_register_chain pid=4453 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:47.971000 audit[4453]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffc747e8390 a2=0 a3=7ffc747e837c items=0 ppid=3975 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.976392 kernel: audit: type=1325 audit(1769214947.971:661): table=filter:129 family=2 entries=50 op=nft_register_chain pid=4453 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:47.971000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:47.986000 audit: BPF prog-id=214 op=LOAD Jan 24 00:35:47.988000 audit: BPF prog-id=215 op=LOAD Jan 24 00:35:47.988000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4417 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538633161353363626461313663633833323031663066383565356664 Jan 24 00:35:47.989000 audit: BPF prog-id=215 op=UNLOAD Jan 24 00:35:47.989000 audit[4431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4417 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538633161353363626461313663633833323031663066383565356664 Jan 24 00:35:47.989000 audit: BPF prog-id=216 op=LOAD Jan 24 00:35:47.989000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4417 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538633161353363626461313663633833323031663066383565356664 Jan 24 00:35:47.989000 audit: BPF prog-id=217 op=LOAD Jan 24 00:35:47.989000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4417 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.989000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538633161353363626461313663633833323031663066383565356664 Jan 24 00:35:47.989000 audit: BPF prog-id=217 op=UNLOAD Jan 24 00:35:47.989000 audit[4431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4417 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538633161353363626461313663633833323031663066383565356664 Jan 24 00:35:47.989000 audit: BPF prog-id=216 op=UNLOAD Jan 24 00:35:47.989000 audit[4431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4417 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538633161353363626461313663633833323031663066383565356664 Jan 24 00:35:47.989000 audit: BPF prog-id=218 op=LOAD Jan 24 00:35:47.989000 audit[4431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4417 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:47.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538633161353363626461313663633833323031663066383565356664 Jan 24 00:35:47.995276 containerd[1677]: time="2026-01-24T00:35:47.995110879Z" level=info msg="connecting to shim 32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc" address="unix:///run/containerd/s/b3f205ed1470dcd3327f434d69b14df70b928ab649d80a971e85eff5f189abff" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:48.019205 containerd[1677]: time="2026-01-24T00:35:48.019172977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8bpc,Uid:fe469055-bc9a-468d-9724-6bf26a67fb3d,Namespace:calico-system,Attempt:0,} returns sandbox id \"58c1a53cbda16cc83201f0f85e5fdc7603893821cae412d22d595d0b22e45b37\"" Jan 24 00:35:48.022263 containerd[1677]: time="2026-01-24T00:35:48.022198237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:35:48.035382 systemd[1]: Started cri-containerd-32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc.scope - libcontainer container 32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc. 
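The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL bytes between arguments: the long `69707461626C65...` value decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`, and the `72756E63...` values decode to `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…` (the tail is truncated in the record). A small decoder using only the standard library:

```go
// Sketch: decode an audit PROCTITLE value. The kernel hex-encodes the process
// command line and separates argv entries with NUL bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are NUL-separated; render them space-separated.
	return strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " "), nil
}

func main() {
	// Value copied from the NETFILTER_CFG audit record above.
	const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	cmd, err := decodeProctitle(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
}
```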
Jan 24 00:35:48.042359 systemd-networkd[1573]: calibf17c8dd7ee: Link UP Jan 24 00:35:48.042519 systemd-networkd[1573]: calibf17c8dd7ee: Gained carrier Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.721 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0 coredns-668d6bf9bc- kube-system 921dd561-510c-4c7e-86db-9c24acbd795a 788 0 2026-01-24 00:35:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd coredns-668d6bf9bc-bqxrd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf17c8dd7ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.721 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.804 [INFO][4390] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" HandleID="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Workload="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.805 [INFO][4390] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" HandleID="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Workload="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5c10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4593-0-0-7-bbab233dcd", "pod":"coredns-668d6bf9bc-bqxrd", "timestamp":"2026-01-24 00:35:47.804915073 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.805 [INFO][4390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.923 [INFO][4390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.923 [INFO][4390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.984 [INFO][4390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:47.993 [INFO][4390] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.000 [INFO][4390] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.003 [INFO][4390] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.007 [INFO][4390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.007 [INFO][4390] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.009 [INFO][4390] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246 Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.013 [INFO][4390] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.028 [INFO][4390] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.133/26] block=192.168.23.128/26 handle="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.028 [INFO][4390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.133/26] handle="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.028 [INFO][4390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:35:48.059109 containerd[1677]: 2026-01-24 00:35:48.028 [INFO][4390] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.133/26] IPv6=[] ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" HandleID="k8s-pod-network.6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Workload="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" Jan 24 00:35:48.059746 containerd[1677]: 2026-01-24 00:35:48.032 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"921dd561-510c-4c7e-86db-9c24acbd795a", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"coredns-668d6bf9bc-bqxrd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf17c8dd7ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:48.059746 containerd[1677]: 2026-01-24 00:35:48.032 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.133/32] ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" Jan 24 00:35:48.059746 containerd[1677]: 2026-01-24 00:35:48.032 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf17c8dd7ee ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" Jan 24 00:35:48.059746 containerd[1677]: 2026-01-24 00:35:48.041 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" Jan 24 00:35:48.059746 containerd[1677]: 2026-01-24 00:35:48.042 [INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"921dd561-510c-4c7e-86db-9c24acbd795a", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246", Pod:"coredns-668d6bf9bc-bqxrd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf17c8dd7ee", MAC:"16:27:fe:93:27:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:48.059746 containerd[1677]: 2026-01-24 00:35:48.056 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqxrd" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-coredns--668d6bf9bc--bqxrd-eth0" Jan 24 00:35:48.063000 audit: BPF prog-id=219 op=LOAD Jan 24 00:35:48.063000 audit: BPF prog-id=220 op=LOAD Jan 24 00:35:48.063000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4469 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646431643830653963613639653233316639653336653033646139 Jan 24 00:35:48.064000 audit: BPF prog-id=220 op=UNLOAD Jan 24 00:35:48.064000 
audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646431643830653963613639653233316639653336653033646139 Jan 24 00:35:48.064000 audit: BPF prog-id=221 op=LOAD Jan 24 00:35:48.064000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4469 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646431643830653963613639653233316639653336653033646139 Jan 24 00:35:48.064000 audit: BPF prog-id=222 op=LOAD Jan 24 00:35:48.064000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4469 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646431643830653963613639653233316639653336653033646139 Jan 24 00:35:48.064000 audit: BPF prog-id=222 op=UNLOAD Jan 24 00:35:48.064000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646431643830653963613639653233316639653336653033646139 Jan 24 00:35:48.064000 audit: BPF prog-id=221 op=UNLOAD Jan 24 00:35:48.064000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646431643830653963613639653233316639653336653033646139 Jan 24 00:35:48.064000 audit: BPF prog-id=223 op=LOAD Jan 24 00:35:48.064000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4469 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332646431643830653963613639653233316639653336653033646139 Jan 24 00:35:48.080000 audit[4516]: NETFILTER_CFG table=filter:130 family=2 entries=50 op=nft_register_chain pid=4516 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:48.080000 audit[4516]: SYSCALL arch=c000003e syscall=46 success=yes exit=24384 a0=3 a1=7ffde71b0830 a2=0 a3=7ffde71b081c items=0 ppid=3975 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.080000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:48.092229 containerd[1677]: time="2026-01-24T00:35:48.092179822Z" level=info msg="connecting to shim 6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246" address="unix:///run/containerd/s/955b66e01ff6a2ce81221ed5073d4f64d9fe683be83d7eaacc762f0dcc2cf0dd" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:48.109607 containerd[1677]: time="2026-01-24T00:35:48.109529928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f49pv,Uid:09588ba1-4540-4c39-997a-6cc3fbc4c31f,Namespace:kube-system,Attempt:0,} returns sandbox id \"32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc\"" Jan 24 00:35:48.115817 containerd[1677]: time="2026-01-24T00:35:48.115705652Z" level=info msg="CreateContainer within sandbox \"32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:35:48.117399 systemd[1]: Started cri-containerd-6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246.scope - libcontainer container 6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246. 
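The containerd entries around this point walk the CRI sequence for the coredns pods: RunPodSandbox returns a sandbox id, then CreateContainer is issued within that sandbox and the resulting container is started as a cri-containerd scope. A sketch of inspecting the same objects over containerd's CRI endpoint is shown below; it assumes the socket path seen in this log, a recent google.golang.org/grpc (grpc.NewClient; older releases use grpc.Dial) and the k8s.io/cri-api v1 bindings, and it is not part of any component that appears in the log.

```go
// Sketch: list the pod sandboxes that the RunPodSandbox entries above created,
// by querying containerd's CRI RuntimeService over its unix socket.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatalf("ListPodSandbox: %v", err)
	}
	for _, sb := range resp.Items {
		// Expect csi-node-driver-x8bpc and the two coredns pods, with the
		// sandbox ids reported by the RunPodSandbox entries above.
		fmt.Printf("%s/%s  %s  %s\n", sb.Metadata.Namespace, sb.Metadata.Name, sb.Id, sb.State)
	}
}
```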
Jan 24 00:35:48.128532 containerd[1677]: time="2026-01-24T00:35:48.128474891Z" level=info msg="Container 1e3d028dd89a8657899f558aa9755fdf4c2e0da32e1a80d72f592d784c493e22: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:48.129000 audit: BPF prog-id=224 op=LOAD Jan 24 00:35:48.129000 audit: BPF prog-id=225 op=LOAD Jan 24 00:35:48.129000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4526 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323632373431626238623237306235323166333264666336353431 Jan 24 00:35:48.129000 audit: BPF prog-id=225 op=UNLOAD Jan 24 00:35:48.129000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323632373431626238623237306235323166333264666336353431 Jan 24 00:35:48.129000 audit: BPF prog-id=226 op=LOAD Jan 24 00:35:48.129000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4526 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323632373431626238623237306235323166333264666336353431 Jan 24 00:35:48.129000 audit: BPF prog-id=227 op=LOAD Jan 24 00:35:48.129000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4526 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323632373431626238623237306235323166333264666336353431 Jan 24 00:35:48.129000 audit: BPF prog-id=227 op=UNLOAD Jan 24 00:35:48.129000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.129000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323632373431626238623237306235323166333264666336353431 Jan 24 00:35:48.129000 audit: BPF prog-id=226 op=UNLOAD Jan 24 00:35:48.129000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323632373431626238623237306235323166333264666336353431 Jan 24 00:35:48.129000 audit: BPF prog-id=228 op=LOAD Jan 24 00:35:48.129000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4526 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323632373431626238623237306235323166333264666336353431 Jan 24 00:35:48.136144 containerd[1677]: time="2026-01-24T00:35:48.136088931Z" level=info msg="CreateContainer within sandbox \"32dd1d80e9ca69e231f9e36e03da9056ee23016e688ea3a27fccc2229009b5cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e3d028dd89a8657899f558aa9755fdf4c2e0da32e1a80d72f592d784c493e22\"" Jan 24 00:35:48.136884 containerd[1677]: time="2026-01-24T00:35:48.136866437Z" level=info msg="StartContainer for \"1e3d028dd89a8657899f558aa9755fdf4c2e0da32e1a80d72f592d784c493e22\"" Jan 24 00:35:48.137757 containerd[1677]: time="2026-01-24T00:35:48.137678502Z" level=info msg="connecting to shim 1e3d028dd89a8657899f558aa9755fdf4c2e0da32e1a80d72f592d784c493e22" address="unix:///run/containerd/s/b3f205ed1470dcd3327f434d69b14df70b928ab649d80a971e85eff5f189abff" protocol=ttrpc version=3 Jan 24 00:35:48.165540 systemd[1]: Started cri-containerd-1e3d028dd89a8657899f558aa9755fdf4c2e0da32e1a80d72f592d784c493e22.scope - libcontainer container 1e3d028dd89a8657899f558aa9755fdf4c2e0da32e1a80d72f592d784c493e22. 
Jan 24 00:35:48.178000 audit: BPF prog-id=229 op=LOAD Jan 24 00:35:48.179000 audit: BPF prog-id=230 op=LOAD Jan 24 00:35:48.179000 audit[4560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4469 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165336430323864643839613836353738393966353538616139373535 Jan 24 00:35:48.180000 audit: BPF prog-id=230 op=UNLOAD Jan 24 00:35:48.180000 audit[4560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165336430323864643839613836353738393966353538616139373535 Jan 24 00:35:48.180000 audit: BPF prog-id=231 op=LOAD Jan 24 00:35:48.180000 audit[4560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4469 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165336430323864643839613836353738393966353538616139373535 Jan 24 00:35:48.180000 audit: BPF prog-id=232 op=LOAD Jan 24 00:35:48.180000 audit[4560]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4469 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165336430323864643839613836353738393966353538616139373535 Jan 24 00:35:48.180000 audit: BPF prog-id=232 op=UNLOAD Jan 24 00:35:48.180000 audit[4560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165336430323864643839613836353738393966353538616139373535 Jan 24 00:35:48.180000 audit: BPF prog-id=231 op=UNLOAD Jan 24 00:35:48.180000 audit[4560]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165336430323864643839613836353738393966353538616139373535 Jan 24 00:35:48.180000 audit: BPF prog-id=233 op=LOAD Jan 24 00:35:48.180000 audit[4560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4469 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165336430323864643839613836353738393966353538616139373535 Jan 24 00:35:48.192583 containerd[1677]: time="2026-01-24T00:35:48.192491140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqxrd,Uid:921dd561-510c-4c7e-86db-9c24acbd795a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246\"" Jan 24 00:35:48.195679 containerd[1677]: time="2026-01-24T00:35:48.195621699Z" level=info msg="CreateContainer within sandbox \"6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:35:48.204781 containerd[1677]: time="2026-01-24T00:35:48.204727652Z" level=info msg="StartContainer for \"1e3d028dd89a8657899f558aa9755fdf4c2e0da32e1a80d72f592d784c493e22\" returns successfully" Jan 24 00:35:48.209225 containerd[1677]: time="2026-01-24T00:35:48.209102136Z" level=info msg="Container 8c956231184a4366aa1c5a7cad4295fd5eb18f18d05cbd36a17561d371b9572c: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:35:48.216025 containerd[1677]: time="2026-01-24T00:35:48.215975891Z" level=info msg="CreateContainer within sandbox \"6c262741bb8b270b521f32dfc6541fa9d54b940a63e4fd475c86e72cde408246\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c956231184a4366aa1c5a7cad4295fd5eb18f18d05cbd36a17561d371b9572c\"" Jan 24 00:35:48.217151 containerd[1677]: time="2026-01-24T00:35:48.216719763Z" level=info msg="StartContainer for \"8c956231184a4366aa1c5a7cad4295fd5eb18f18d05cbd36a17561d371b9572c\"" Jan 24 00:35:48.218089 containerd[1677]: time="2026-01-24T00:35:48.218031531Z" level=info msg="connecting to shim 8c956231184a4366aa1c5a7cad4295fd5eb18f18d05cbd36a17561d371b9572c" address="unix:///run/containerd/s/955b66e01ff6a2ce81221ed5073d4f64d9fe683be83d7eaacc762f0dcc2cf0dd" protocol=ttrpc version=3 Jan 24 00:35:48.239326 systemd[1]: Started cri-containerd-8c956231184a4366aa1c5a7cad4295fd5eb18f18d05cbd36a17561d371b9572c.scope - libcontainer container 8c956231184a4366aa1c5a7cad4295fd5eb18f18d05cbd36a17561d371b9572c. 
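The audit PROCTITLE records above encode each process's command line as hex, with NUL bytes separating the arguments, which is why the repeated runc invocations are hard to read at a glance. As a hedged illustration (plain Python 3, standard library only; the constant below is copied from the first PROCTITLE value in this section), the title can be decoded like this. Note the trailing container ID comes out truncated because the audit record itself truncates the title.

    # Decode one audit PROCTITLE value from the records above: hex-encoded argv
    # with NUL separators. Standard library only; purely illustrative.
    PROCTITLE = (
        "72756E63"                                                # "runc"
        "00"
        "2D2D726F6F74"                                            # "--root"
        "00"
        "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"  # "/run/containerd/runc/k8s.io"
        "00"
        "2D2D6C6F67"                                              # "--log"
        "00"
        "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F"
        "3165336430323864643839613836353738393966353538616139373535"  # container ID, cut short by the audit record
    )

    argv = bytes.fromhex(PROCTITLE).split(b"\x00")
    print([a.decode("utf-8", errors="replace") for a in argv])
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
    #  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e3d028dd89a8657899f558aa9755']

Decoded, the record is simply runc being invoked by containerd's v2 shim with --root /run/containerd/runc/k8s.io and a per-task log path; the same pattern repeats for every container start in this section.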
Jan 24 00:35:48.255000 audit: BPF prog-id=234 op=LOAD Jan 24 00:35:48.256000 audit: BPF prog-id=235 op=LOAD Jan 24 00:35:48.256000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4526 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393536323331313834613433363661613163356137636164343239 Jan 24 00:35:48.256000 audit: BPF prog-id=235 op=UNLOAD Jan 24 00:35:48.256000 audit[4600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393536323331313834613433363661613163356137636164343239 Jan 24 00:35:48.256000 audit: BPF prog-id=236 op=LOAD Jan 24 00:35:48.256000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4526 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393536323331313834613433363661613163356137636164343239 Jan 24 00:35:48.256000 audit: BPF prog-id=237 op=LOAD Jan 24 00:35:48.256000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4526 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393536323331313834613433363661613163356137636164343239 Jan 24 00:35:48.256000 audit: BPF prog-id=237 op=UNLOAD Jan 24 00:35:48.256000 audit[4600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393536323331313834613433363661613163356137636164343239 Jan 24 00:35:48.256000 audit: BPF prog-id=236 op=UNLOAD Jan 24 00:35:48.256000 audit[4600]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393536323331313834613433363661613163356137636164343239 Jan 24 00:35:48.256000 audit: BPF prog-id=238 op=LOAD Jan 24 00:35:48.256000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4526 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393536323331313834613433363661613163356137636164343239 Jan 24 00:35:48.274599 containerd[1677]: time="2026-01-24T00:35:48.274553672Z" level=info msg="StartContainer for \"8c956231184a4366aa1c5a7cad4295fd5eb18f18d05cbd36a17561d371b9572c\" returns successfully" Jan 24 00:35:48.338149 containerd[1677]: time="2026-01-24T00:35:48.338084301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:48.339746 containerd[1677]: time="2026-01-24T00:35:48.339708409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:35:48.339788 containerd[1677]: time="2026-01-24T00:35:48.339767533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:48.340006 kubelet[2887]: E0124 00:35:48.339962 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:35:48.340426 kubelet[2887]: E0124 00:35:48.340006 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:35:48.340426 kubelet[2887]: E0124 00:35:48.340110 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:48.342256 containerd[1677]: time="2026-01-24T00:35:48.342234887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:35:48.656792 containerd[1677]: time="2026-01-24T00:35:48.656720655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-9wsxh,Uid:bd7b47a3-e9a9-4695-b299-4e30c1f99caf,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:35:48.687888 containerd[1677]: time="2026-01-24T00:35:48.687846009Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:48.689515 containerd[1677]: time="2026-01-24T00:35:48.689319869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:35:48.689515 containerd[1677]: time="2026-01-24T00:35:48.689458585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:48.690836 kubelet[2887]: E0124 00:35:48.690281 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:35:48.690836 kubelet[2887]: E0124 00:35:48.690376 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:35:48.690836 kubelet[2887]: E0124 00:35:48.690519 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:48.693189 kubelet[2887]: E0124 00:35:48.693114 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 
00:35:48.817555 systemd-networkd[1573]: cali83eebfbc646: Link UP Jan 24 00:35:48.818258 systemd-networkd[1573]: cali83eebfbc646: Gained carrier Jan 24 00:35:48.834144 kubelet[2887]: E0124 00:35:48.834034 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.730 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0 calico-apiserver-7b7c7f79fb- calico-apiserver bd7b47a3-e9a9-4695-b299-4e30c1f99caf 798 0 2026-01-24 00:35:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b7c7f79fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd calico-apiserver-7b7c7f79fb-9wsxh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83eebfbc646 [] [] }} ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.730 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.772 [INFO][4648] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" HandleID="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.772 [INFO][4648] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" HandleID="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4593-0-0-7-bbab233dcd", "pod":"calico-apiserver-7b7c7f79fb-9wsxh", 
"timestamp":"2026-01-24 00:35:48.772510287 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.772 [INFO][4648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.772 [INFO][4648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.772 [INFO][4648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.784 [INFO][4648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.789 [INFO][4648] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.794 [INFO][4648] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.796 [INFO][4648] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.799 [INFO][4648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.799 [INFO][4648] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.800 [INFO][4648] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5 Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.805 [INFO][4648] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.812 [INFO][4648] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.134/26] block=192.168.23.128/26 handle="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.812 [INFO][4648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.134/26] handle="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.812 [INFO][4648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:35:48.839100 containerd[1677]: 2026-01-24 00:35:48.812 [INFO][4648] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.134/26] IPv6=[] ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" HandleID="k8s-pod-network.969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" Jan 24 00:35:48.842343 containerd[1677]: 2026-01-24 00:35:48.814 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0", GenerateName:"calico-apiserver-7b7c7f79fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd7b47a3-e9a9-4695-b299-4e30c1f99caf", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7c7f79fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"calico-apiserver-7b7c7f79fb-9wsxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83eebfbc646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:48.842343 containerd[1677]: 2026-01-24 00:35:48.814 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.134/32] ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" Jan 24 00:35:48.842343 containerd[1677]: 2026-01-24 00:35:48.814 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83eebfbc646 ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" Jan 24 00:35:48.842343 containerd[1677]: 2026-01-24 00:35:48.818 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" Jan 24 00:35:48.842343 containerd[1677]: 2026-01-24 
00:35:48.819 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0", GenerateName:"calico-apiserver-7b7c7f79fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd7b47a3-e9a9-4695-b299-4e30c1f99caf", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7c7f79fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5", Pod:"calico-apiserver-7b7c7f79fb-9wsxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83eebfbc646", MAC:"0a:7c:af:59:b7:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:48.842343 containerd[1677]: 2026-01-24 00:35:48.833 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-9wsxh" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--9wsxh-eth0" Jan 24 00:35:48.845833 kubelet[2887]: E0124 00:35:48.845555 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:35:48.874000 audit[4661]: NETFILTER_CFG table=filter:131 family=2 entries=62 op=nft_register_chain pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:48.876713 containerd[1677]: time="2026-01-24T00:35:48.876680407Z" level=info msg="connecting to shim 969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5" address="unix:///run/containerd/s/105d258b73932e1e3fe079a940609a0f8936df8251872eb56b7769019f66e5cc" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:48.874000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=31756 a0=3 a1=7ffd5f35ef70 a2=0 a3=7ffd5f35ef5c items=0 
ppid=3975 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.874000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:48.893196 kubelet[2887]: I0124 00:35:48.892877 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bqxrd" podStartSLOduration=36.892859603 podStartE2EDuration="36.892859603s" podCreationTimestamp="2026-01-24 00:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:48.892853955 +0000 UTC m=+43.313785283" watchObservedRunningTime="2026-01-24 00:35:48.892859603 +0000 UTC m=+43.313790931" Jan 24 00:35:48.915642 systemd[1]: Started cri-containerd-969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5.scope - libcontainer container 969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5. Jan 24 00:35:48.919647 kubelet[2887]: I0124 00:35:48.919607 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f49pv" podStartSLOduration=36.919589431 podStartE2EDuration="36.919589431s" podCreationTimestamp="2026-01-24 00:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:35:48.917488652 +0000 UTC m=+43.338419981" watchObservedRunningTime="2026-01-24 00:35:48.919589431 +0000 UTC m=+43.340520786" Jan 24 00:35:48.929000 audit[4696]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:48.929000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc59dfde50 a2=0 a3=7ffc59dfde3c items=0 ppid=3004 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:48.934000 audit[4696]: NETFILTER_CFG table=nat:133 family=2 entries=35 op=nft_register_chain pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:48.934000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc59dfde50 a2=0 a3=7ffc59dfde3c items=0 ppid=3004 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:48.939000 audit: BPF prog-id=239 op=LOAD Jan 24 00:35:48.940000 audit: BPF prog-id=240 op=LOAD Jan 24 00:35:48.940000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393832316566373061373065363831393765323338386462313737 Jan 24 00:35:48.940000 audit: BPF prog-id=240 op=UNLOAD Jan 24 00:35:48.940000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393832316566373061373065363831393765323338386462313737 Jan 24 00:35:48.940000 audit: BPF prog-id=241 op=LOAD Jan 24 00:35:48.940000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393832316566373061373065363831393765323338386462313737 Jan 24 00:35:48.940000 audit: BPF prog-id=242 op=LOAD Jan 24 00:35:48.940000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393832316566373061373065363831393765323338386462313737 Jan 24 00:35:48.940000 audit: BPF prog-id=242 op=UNLOAD Jan 24 00:35:48.940000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393832316566373061373065363831393765323338386462313737 Jan 24 00:35:48.940000 audit: BPF prog-id=241 op=UNLOAD Jan 24 00:35:48.940000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.940000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393832316566373061373065363831393765323338386462313737 Jan 24 00:35:48.940000 audit: BPF prog-id=243 op=LOAD Jan 24 00:35:48.940000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393832316566373061373065363831393765323338386462313737 Jan 24 00:35:48.952000 audit[4704]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:48.952000 audit[4704]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff6c3b2660 a2=0 a3=7fff6c3b264c items=0 ppid=3004 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:48.969000 audit[4704]: NETFILTER_CFG table=nat:135 family=2 entries=56 op=nft_register_chain pid=4704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:48.969000 audit[4704]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff6c3b2660 a2=0 a3=7fff6c3b264c items=0 ppid=3004 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:48.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:48.973677 systemd-networkd[1573]: calif7d8372b696: Gained IPv6LL Jan 24 00:35:48.993834 containerd[1677]: time="2026-01-24T00:35:48.993784604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-9wsxh,Uid:bd7b47a3-e9a9-4695-b299-4e30c1f99caf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"969821ef70a70e68197e2388db17777705502d0b5a6ef06f80f49968a9aaf2f5\"" Jan 24 00:35:48.997252 containerd[1677]: time="2026-01-24T00:35:48.997079443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:35:49.326986 containerd[1677]: time="2026-01-24T00:35:49.326920331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:49.329202 containerd[1677]: time="2026-01-24T00:35:49.329140852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:35:49.329314 containerd[1677]: time="2026-01-24T00:35:49.329275037Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:49.329495 kubelet[2887]: E0124 00:35:49.329463 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:35:49.329557 kubelet[2887]: E0124 00:35:49.329508 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:35:49.329694 kubelet[2887]: E0124 00:35:49.329652 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-9wsxh_calico-apiserver(bd7b47a3-e9a9-4695-b299-4e30c1f99caf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:49.331076 kubelet[2887]: E0124 00:35:49.331043 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:35:49.421433 systemd-networkd[1573]: calibf17c8dd7ee: Gained IPv6LL Jan 24 00:35:49.655661 containerd[1677]: time="2026-01-24T00:35:49.655491695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-x9lmq,Uid:c22b1229-18a4-4f62-9e22-d2ed4a3840d1,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:35:49.656567 containerd[1677]: time="2026-01-24T00:35:49.655990757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cbbc8dcd-w5nn4,Uid:f695da60-5d07-4b6c-8f24-e49612d3b40f,Namespace:calico-system,Attempt:0,}" Jan 24 00:35:49.678039 systemd-networkd[1573]: cali59661adc6bc: Gained IPv6LL Jan 24 00:35:49.803476 systemd-networkd[1573]: cali6e54ed2544a: Link UP Jan 24 00:35:49.804848 systemd-networkd[1573]: cali6e54ed2544a: Gained carrier Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.719 [INFO][4717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0 calico-kube-controllers-64cbbc8dcd- calico-system f695da60-5d07-4b6c-8f24-e49612d3b40f 796 0 2026-01-24 00:35:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64cbbc8dcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd calico-kube-controllers-64cbbc8dcd-w5nn4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6e54ed2544a [] [] }} ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.719 [INFO][4717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.759 [INFO][4737] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" HandleID="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.759 [INFO][4737] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" HandleID="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-7-bbab233dcd", 
"pod":"calico-kube-controllers-64cbbc8dcd-w5nn4", "timestamp":"2026-01-24 00:35:49.759052366 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.759 [INFO][4737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.759 [INFO][4737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.759 [INFO][4737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.768 [INFO][4737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.772 [INFO][4737] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.777 [INFO][4737] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.778 [INFO][4737] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.781 [INFO][4737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.781 [INFO][4737] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.782 [INFO][4737] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.786 [INFO][4737] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.794 [INFO][4737] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.135/26] block=192.168.23.128/26 handle="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.794 [INFO][4737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.135/26] handle="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.794 [INFO][4737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:35:49.820512 containerd[1677]: 2026-01-24 00:35:49.794 [INFO][4737] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.135/26] IPv6=[] ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" HandleID="k8s-pod-network.e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" Jan 24 00:35:49.822290 containerd[1677]: 2026-01-24 00:35:49.797 [INFO][4717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0", GenerateName:"calico-kube-controllers-64cbbc8dcd-", Namespace:"calico-system", SelfLink:"", UID:"f695da60-5d07-4b6c-8f24-e49612d3b40f", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cbbc8dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"calico-kube-controllers-64cbbc8dcd-w5nn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e54ed2544a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:49.822290 containerd[1677]: 2026-01-24 00:35:49.797 [INFO][4717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.135/32] ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" Jan 24 00:35:49.822290 containerd[1677]: 2026-01-24 00:35:49.797 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e54ed2544a ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" Jan 24 00:35:49.822290 containerd[1677]: 2026-01-24 00:35:49.803 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" 
WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" Jan 24 00:35:49.822290 containerd[1677]: 2026-01-24 00:35:49.804 [INFO][4717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0", GenerateName:"calico-kube-controllers-64cbbc8dcd-", Namespace:"calico-system", SelfLink:"", UID:"f695da60-5d07-4b6c-8f24-e49612d3b40f", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cbbc8dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d", Pod:"calico-kube-controllers-64cbbc8dcd-w5nn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e54ed2544a", MAC:"86:3b:68:95:4d:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:49.822290 containerd[1677]: 2026-01-24 00:35:49.818 [INFO][4717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" Namespace="calico-system" Pod="calico-kube-controllers-64cbbc8dcd-w5nn4" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--kube--controllers--64cbbc8dcd--w5nn4-eth0" Jan 24 00:35:49.830000 audit[4759]: NETFILTER_CFG table=filter:136 family=2 entries=52 op=nft_register_chain pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:49.830000 audit[4759]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7fff6bb2d140 a2=0 a3=7fff6bb2d12c items=0 ppid=3975 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.830000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:49.843744 containerd[1677]: time="2026-01-24T00:35:49.843685706Z" level=info msg="connecting to shim e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d" 
address="unix:///run/containerd/s/5d2d987c86aee04f5bc8364cbcf00598112ac7abdecb5f4e01c2b0c47431ebcb" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:49.851728 kubelet[2887]: E0124 00:35:49.851599 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:35:49.854557 kubelet[2887]: E0124 00:35:49.854301 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:35:49.883395 systemd[1]: Started cri-containerd-e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d.scope - libcontainer container e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d. 
Jan 24 00:35:49.901000 audit: BPF prog-id=244 op=LOAD Jan 24 00:35:49.902000 audit: BPF prog-id=245 op=LOAD Jan 24 00:35:49.902000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4767 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531353864653866653130643463346165643339396531396539366130 Jan 24 00:35:49.902000 audit: BPF prog-id=245 op=UNLOAD Jan 24 00:35:49.902000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531353864653866653130643463346165643339396531396539366130 Jan 24 00:35:49.902000 audit: BPF prog-id=246 op=LOAD Jan 24 00:35:49.902000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4767 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531353864653866653130643463346165643339396531396539366130 Jan 24 00:35:49.902000 audit: BPF prog-id=247 op=LOAD Jan 24 00:35:49.902000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4767 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531353864653866653130643463346165643339396531396539366130 Jan 24 00:35:49.902000 audit: BPF prog-id=247 op=UNLOAD Jan 24 00:35:49.902000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531353864653866653130643463346165643339396531396539366130 Jan 24 00:35:49.902000 audit: BPF prog-id=246 op=UNLOAD Jan 24 00:35:49.902000 audit[4781]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531353864653866653130643463346165643339396531396539366130 Jan 24 00:35:49.902000 audit: BPF prog-id=248 op=LOAD Jan 24 00:35:49.902000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4767 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531353864653866653130643463346165643339396531396539366130 Jan 24 00:35:49.925204 systemd-networkd[1573]: cali1b97f5501cc: Link UP Jan 24 00:35:49.926582 systemd-networkd[1573]: cali1b97f5501cc: Gained carrier Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.729 [INFO][4712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0 calico-apiserver-7b7c7f79fb- calico-apiserver c22b1229-18a4-4f62-9e22-d2ed4a3840d1 795 0 2026-01-24 00:35:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b7c7f79fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4593-0-0-7-bbab233dcd calico-apiserver-7b7c7f79fb-x9lmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b97f5501cc [] [] }} ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.730 [INFO][4712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.764 [INFO][4742] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" HandleID="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.764 [INFO][4742] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" HandleID="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" 
Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4593-0-0-7-bbab233dcd", "pod":"calico-apiserver-7b7c7f79fb-x9lmq", "timestamp":"2026-01-24 00:35:49.764078182 +0000 UTC"}, Hostname:"ci-4593-0-0-7-bbab233dcd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.764 [INFO][4742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.794 [INFO][4742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.794 [INFO][4742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-7-bbab233dcd' Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.868 [INFO][4742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.889 [INFO][4742] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.897 [INFO][4742] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.898 [INFO][4742] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.905 [INFO][4742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.905 [INFO][4742] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.907 [INFO][4742] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24 Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.911 [INFO][4742] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.919 [INFO][4742] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.136/26] block=192.168.23.128/26 handle="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.919 [INFO][4742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.136/26] handle="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" host="ci-4593-0-0-7-bbab233dcd" Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.919 [INFO][4742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:35:49.944035 containerd[1677]: 2026-01-24 00:35:49.919 [INFO][4742] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.136/26] IPv6=[] ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" HandleID="k8s-pod-network.ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Workload="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" Jan 24 00:35:49.946317 containerd[1677]: 2026-01-24 00:35:49.921 [INFO][4712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0", GenerateName:"calico-apiserver-7b7c7f79fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c22b1229-18a4-4f62-9e22-d2ed4a3840d1", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7c7f79fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"", Pod:"calico-apiserver-7b7c7f79fb-x9lmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b97f5501cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:49.946317 containerd[1677]: 2026-01-24 00:35:49.921 [INFO][4712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.136/32] ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" Jan 24 00:35:49.946317 containerd[1677]: 2026-01-24 00:35:49.921 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b97f5501cc ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" Jan 24 00:35:49.946317 containerd[1677]: 2026-01-24 00:35:49.927 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" Jan 24 00:35:49.946317 containerd[1677]: 2026-01-24 
00:35:49.928 [INFO][4712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0", GenerateName:"calico-apiserver-7b7c7f79fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c22b1229-18a4-4f62-9e22-d2ed4a3840d1", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b7c7f79fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-7-bbab233dcd", ContainerID:"ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24", Pod:"calico-apiserver-7b7c7f79fb-x9lmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b97f5501cc", MAC:"2e:84:70:db:bf:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:35:49.946317 containerd[1677]: 2026-01-24 00:35:49.939 [INFO][4712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" Namespace="calico-apiserver" Pod="calico-apiserver-7b7c7f79fb-x9lmq" WorkloadEndpoint="ci--4593--0--0--7--bbab233dcd-k8s-calico--apiserver--7b7c7f79fb--x9lmq-eth0" Jan 24 00:35:49.961547 containerd[1677]: time="2026-01-24T00:35:49.961503079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cbbc8dcd-w5nn4,Uid:f695da60-5d07-4b6c-8f24-e49612d3b40f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e158de8fe10d4c4aed399e19e96a04298bf456ca4c7cfdbd74541bf2f262e93d\"" Jan 24 00:35:49.964401 containerd[1677]: time="2026-01-24T00:35:49.962646249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:35:49.966000 audit[4815]: NETFILTER_CFG table=filter:137 family=2 entries=57 op=nft_register_chain pid=4815 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:35:49.966000 audit[4815]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffd9cd34140 a2=0 a3=7ffd9cd3412c items=0 ppid=3975 pid=4815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:49.966000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:35:49.976227 containerd[1677]: time="2026-01-24T00:35:49.976119220Z" level=info msg="connecting to shim ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24" address="unix:///run/containerd/s/8e8ae4e426fa9468e948a08ee7bfe129f7debed04fb3c3b0ff8187702be2e11a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:35:50.004393 systemd[1]: Started cri-containerd-ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24.scope - libcontainer container ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24. Jan 24 00:35:50.003000 audit[4848]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:50.003000 audit[4848]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe1f589900 a2=0 a3=7ffe1f5898ec items=0 ppid=3004 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:50.008000 audit[4848]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:50.008000 audit[4848]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe1f589900 a2=0 a3=7ffe1f5898ec items=0 ppid=3004 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:50.013000 audit: BPF prog-id=249 op=LOAD Jan 24 00:35:50.014000 audit: BPF prog-id=250 op=LOAD Jan 24 00:35:50.014000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4825 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163653332396234653735613831316566613836633930393337313134 Jan 24 00:35:50.014000 audit: BPF prog-id=250 op=UNLOAD Jan 24 00:35:50.014000 audit[4836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4825 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163653332396234653735613831316566613836633930393337313134 Jan 24 00:35:50.014000 audit: BPF prog-id=251 op=LOAD Jan 24 
00:35:50.014000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4825 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163653332396234653735613831316566613836633930393337313134 Jan 24 00:35:50.014000 audit: BPF prog-id=252 op=LOAD Jan 24 00:35:50.014000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4825 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163653332396234653735613831316566613836633930393337313134 Jan 24 00:35:50.014000 audit: BPF prog-id=252 op=UNLOAD Jan 24 00:35:50.014000 audit[4836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4825 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163653332396234653735613831316566613836633930393337313134 Jan 24 00:35:50.014000 audit: BPF prog-id=251 op=UNLOAD Jan 24 00:35:50.014000 audit[4836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4825 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163653332396234653735613831316566613836633930393337313134 Jan 24 00:35:50.014000 audit: BPF prog-id=253 op=LOAD Jan 24 00:35:50.014000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4825 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163653332396234653735613831316566613836633930393337313134 Jan 24 00:35:50.057478 containerd[1677]: time="2026-01-24T00:35:50.057443563Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7b7c7f79fb-x9lmq,Uid:c22b1229-18a4-4f62-9e22-d2ed4a3840d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ace329b4e75a811efa86c90937114388850a4d67d8402858c18d3c06d6608f24\"" Jan 24 00:35:50.294692 containerd[1677]: time="2026-01-24T00:35:50.294635067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:50.296035 containerd[1677]: time="2026-01-24T00:35:50.295996262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:35:50.296197 containerd[1677]: time="2026-01-24T00:35:50.296059280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:50.296385 kubelet[2887]: E0124 00:35:50.296338 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:35:50.296436 kubelet[2887]: E0124 00:35:50.296383 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:35:50.296791 kubelet[2887]: E0124 00:35:50.296609 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dj67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cbbc8dcd-w5nn4_calico-system(f695da60-5d07-4b6c-8f24-e49612d3b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:50.297712 containerd[1677]: time="2026-01-24T00:35:50.296926976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:35:50.298035 kubelet[2887]: E0124 00:35:50.298004 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:35:50.573377 systemd-networkd[1573]: cali83eebfbc646: Gained IPv6LL Jan 24 00:35:50.641861 containerd[1677]: time="2026-01-24T00:35:50.641648687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:50.643629 containerd[1677]: time="2026-01-24T00:35:50.643523351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:35:50.643716 containerd[1677]: time="2026-01-24T00:35:50.643592073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:50.644060 kubelet[2887]: E0124 00:35:50.644000 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:35:50.644163 kubelet[2887]: E0124 00:35:50.644090 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:35:50.644548 kubelet[2887]: E0124 00:35:50.644449 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnqcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-x9lmq_calico-apiserver(c22b1229-18a4-4f62-9e22-d2ed4a3840d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:50.646231 kubelet[2887]: E0124 00:35:50.646124 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:35:50.855013 kubelet[2887]: E0124 00:35:50.854553 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:35:50.860239 kubelet[2887]: E0124 00:35:50.859763 2887 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:35:50.860435 kubelet[2887]: E0124 00:35:50.860352 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:35:51.025000 audit[4871]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4871 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:51.025000 audit[4871]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd177201c0 a2=0 a3=7ffd177201ac items=0 ppid=3004 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:51.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:51.028000 audit[4871]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=4871 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:35:51.028000 audit[4871]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd177201c0 a2=0 a3=7ffd177201ac items=0 ppid=3004 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:35:51.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:35:51.469646 systemd-networkd[1573]: cali6e54ed2544a: Gained IPv6LL Jan 24 00:35:51.725354 systemd-networkd[1573]: cali1b97f5501cc: Gained IPv6LL Jan 24 00:35:51.860085 kubelet[2887]: E0124 00:35:51.860013 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:35:51.861247 kubelet[2887]: E0124 00:35:51.860828 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:35:58.657358 containerd[1677]: time="2026-01-24T00:35:58.657080746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:35:59.020077 containerd[1677]: time="2026-01-24T00:35:59.019916529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:59.022727 containerd[1677]: time="2026-01-24T00:35:59.022620226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:35:59.022727 containerd[1677]: time="2026-01-24T00:35:59.022640720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:59.023339 kubelet[2887]: E0124 00:35:59.022829 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:35:59.023339 kubelet[2887]: E0124 00:35:59.022869 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:35:59.023339 kubelet[2887]: E0124 00:35:59.022963 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5f190ef84d634309a1c72b69d1983ada,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:59.025899 containerd[1677]: time="2026-01-24T00:35:59.025874621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:35:59.357264 containerd[1677]: time="2026-01-24T00:35:59.356955216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:35:59.358739 containerd[1677]: time="2026-01-24T00:35:59.358615552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:35:59.358832 containerd[1677]: time="2026-01-24T00:35:59.358700820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:35:59.359092 kubelet[2887]: E0124 00:35:59.359021 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:35:59.359155 kubelet[2887]: E0124 00:35:59.359106 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:35:59.359574 kubelet[2887]: E0124 00:35:59.359349 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:59.360647 kubelet[2887]: E0124 00:35:59.360572 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:36:01.656884 containerd[1677]: time="2026-01-24T00:36:01.656663729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:36:01.989947 containerd[1677]: time="2026-01-24T00:36:01.989714774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:01.990856 containerd[1677]: time="2026-01-24T00:36:01.990776809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:36:01.990912 
containerd[1677]: time="2026-01-24T00:36:01.990842033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:01.991052 kubelet[2887]: E0124 00:36:01.991003 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:01.991052 kubelet[2887]: E0124 00:36:01.991047 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:01.991391 kubelet[2887]: E0124 00:36:01.991181 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:01.993882 containerd[1677]: time="2026-01-24T00:36:01.993848130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:36:02.327520 containerd[1677]: time="2026-01-24T00:36:02.327479622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:02.328600 containerd[1677]: time="2026-01-24T00:36:02.328555602Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:36:02.328655 containerd[1677]: time="2026-01-24T00:36:02.328631611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:02.328822 kubelet[2887]: E0124 00:36:02.328781 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:02.328872 kubelet[2887]: E0124 00:36:02.328828 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:02.328977 kubelet[2887]: E0124 00:36:02.328938 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:02.330410 kubelet[2887]: E0124 00:36:02.330380 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:36:02.657632 containerd[1677]: time="2026-01-24T00:36:02.657500109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:02.998952 containerd[1677]: time="2026-01-24T00:36:02.998900603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:03.000937 containerd[1677]: time="2026-01-24T00:36:03.000873837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:03.001057 containerd[1677]: time="2026-01-24T00:36:03.000995645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:03.001333 kubelet[2887]: E0124 00:36:03.001187 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:03.001333 kubelet[2887]: E0124 00:36:03.001248 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:03.002657 kubelet[2887]: E0124 00:36:03.001665 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnqcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-x9lmq_calico-apiserver(c22b1229-18a4-4f62-9e22-d2ed4a3840d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:03.003829 kubelet[2887]: E0124 00:36:03.003791 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:36:03.658530 containerd[1677]: time="2026-01-24T00:36:03.657688282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:03.992009 containerd[1677]: time="2026-01-24T00:36:03.991755340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:03.993098 containerd[1677]: time="2026-01-24T00:36:03.992999175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:03.993098 containerd[1677]: time="2026-01-24T00:36:03.993072798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:03.993337 
kubelet[2887]: E0124 00:36:03.993294 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:03.993426 kubelet[2887]: E0124 00:36:03.993354 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:03.994234 kubelet[2887]: E0124 00:36:03.993504 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-9wsxh_calico-apiserver(bd7b47a3-e9a9-4695-b299-4e30c1f99caf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:03.995416 kubelet[2887]: E0124 00:36:03.995365 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:36:04.657346 containerd[1677]: time="2026-01-24T00:36:04.657305760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:36:04.989708 containerd[1677]: time="2026-01-24T00:36:04.989580217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:04.990823 containerd[1677]: time="2026-01-24T00:36:04.990792250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:36:04.990937 containerd[1677]: time="2026-01-24T00:36:04.990859614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:04.991038 kubelet[2887]: E0124 00:36:04.991005 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:04.991421 kubelet[2887]: E0124 00:36:04.991045 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:04.991421 kubelet[2887]: E0124 00:36:04.991162 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghhgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmlzg_calico-system(cc6833e1-bfb0-4eb5-9ff2-60bda2e93290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:04.992353 kubelet[2887]: E0124 00:36:04.992330 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:36:05.660351 containerd[1677]: time="2026-01-24T00:36:05.660231263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:36:05.985873 containerd[1677]: time="2026-01-24T00:36:05.985747109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:05.987614 containerd[1677]: time="2026-01-24T00:36:05.987559210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:36:05.987711 containerd[1677]: time="2026-01-24T00:36:05.987657208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:05.988088 kubelet[2887]: E0124 00:36:05.987856 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:05.988088 kubelet[2887]: E0124 00:36:05.987905 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:05.988088 
kubelet[2887]: E0124 00:36:05.988033 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dj67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cbbc8dcd-w5nn4_calico-system(f695da60-5d07-4b6c-8f24-e49612d3b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:05.989527 kubelet[2887]: E0124 00:36:05.989496 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:36:12.657107 kubelet[2887]: E0124 00:36:12.657067 2887 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:36:16.655855 kubelet[2887]: E0124 00:36:16.655811 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:36:17.658106 kubelet[2887]: E0124 00:36:17.658049 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:36:17.660336 kubelet[2887]: E0124 00:36:17.660301 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:36:18.659068 kubelet[2887]: E0124 00:36:18.658628 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 
00:36:18.660274 kubelet[2887]: E0124 00:36:18.659798 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:36:27.658965 containerd[1677]: time="2026-01-24T00:36:27.658930740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:36:27.993881 containerd[1677]: time="2026-01-24T00:36:27.993834547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:27.998345 containerd[1677]: time="2026-01-24T00:36:27.998299877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:36:27.998582 containerd[1677]: time="2026-01-24T00:36:27.998406968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:27.998822 kubelet[2887]: E0124 00:36:27.998698 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:27.998822 kubelet[2887]: E0124 00:36:27.998763 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:36:27.999551 kubelet[2887]: E0124 00:36:27.999455 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5f190ef84d634309a1c72b69d1983ada,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:28.003011 containerd[1677]: time="2026-01-24T00:36:28.002985825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:36:28.339201 containerd[1677]: time="2026-01-24T00:36:28.338568600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:28.340611 containerd[1677]: time="2026-01-24T00:36:28.340388816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:36:28.340611 containerd[1677]: time="2026-01-24T00:36:28.340450454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:28.341157 kubelet[2887]: E0124 00:36:28.341076 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:28.341157 kubelet[2887]: E0124 00:36:28.341139 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:36:28.341423 kubelet[2887]: E0124 00:36:28.341394 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:28.342951 kubelet[2887]: E0124 00:36:28.342901 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:36:28.658380 containerd[1677]: time="2026-01-24T00:36:28.657731772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:36:28.989756 containerd[1677]: time="2026-01-24T00:36:28.989353228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:28.991703 containerd[1677]: time="2026-01-24T00:36:28.991525027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:36:28.991703 
containerd[1677]: time="2026-01-24T00:36:28.991638138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:28.991906 kubelet[2887]: E0124 00:36:28.991859 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:28.991957 kubelet[2887]: E0124 00:36:28.991926 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:36:28.992235 kubelet[2887]: E0124 00:36:28.992171 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:28.995122 containerd[1677]: time="2026-01-24T00:36:28.994863162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:36:29.321533 containerd[1677]: time="2026-01-24T00:36:29.321041599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:29.323898 containerd[1677]: time="2026-01-24T00:36:29.322127984Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:36:29.323898 containerd[1677]: time="2026-01-24T00:36:29.322231729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:29.324340 kubelet[2887]: E0124 00:36:29.324304 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:29.324601 kubelet[2887]: E0124 00:36:29.324351 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:36:29.324601 kubelet[2887]: E0124 00:36:29.324469 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:29.325844 kubelet[2887]: E0124 00:36:29.325817 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:36:29.658561 containerd[1677]: time="2026-01-24T00:36:29.658318496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:29.974347 containerd[1677]: time="2026-01-24T00:36:29.974249879Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:29.975396 containerd[1677]: time="2026-01-24T00:36:29.975366252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:29.975473 containerd[1677]: time="2026-01-24T00:36:29.975434816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:29.975584 kubelet[2887]: E0124 00:36:29.975555 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:29.975621 kubelet[2887]: E0124 00:36:29.975595 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:29.976178 kubelet[2887]: E0124 00:36:29.975828 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-9wsxh_calico-apiserver(bd7b47a3-e9a9-4695-b299-4e30c1f99caf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:29.976326 containerd[1677]: time="2026-01-24T00:36:29.976029636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:36:29.977547 kubelet[2887]: E0124 00:36:29.977522 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:36:30.295808 containerd[1677]: time="2026-01-24T00:36:30.295766626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:30.297196 containerd[1677]: time="2026-01-24T00:36:30.297160287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:36:30.297276 containerd[1677]: time="2026-01-24T00:36:30.297245193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:30.297409 kubelet[2887]: 
E0124 00:36:30.297375 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:30.297456 kubelet[2887]: E0124 00:36:30.297416 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:36:30.297564 kubelet[2887]: E0124 00:36:30.297531 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghhgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmlzg_calico-system(cc6833e1-bfb0-4eb5-9ff2-60bda2e93290): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:30.298924 kubelet[2887]: E0124 00:36:30.298893 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:36:30.656441 containerd[1677]: time="2026-01-24T00:36:30.656334057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:36:31.003232 containerd[1677]: time="2026-01-24T00:36:31.003157176Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:31.004486 containerd[1677]: time="2026-01-24T00:36:31.004405891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:36:31.004486 containerd[1677]: time="2026-01-24T00:36:31.004455498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:31.004637 kubelet[2887]: E0124 00:36:31.004606 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:31.004876 kubelet[2887]: E0124 00:36:31.004647 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:36:31.004876 kubelet[2887]: E0124 00:36:31.004750 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnqcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-x9lmq_calico-apiserver(c22b1229-18a4-4f62-9e22-d2ed4a3840d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:31.006743 kubelet[2887]: E0124 00:36:31.006708 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:36:31.656784 containerd[1677]: time="2026-01-24T00:36:31.656746904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:36:31.988079 containerd[1677]: time="2026-01-24T00:36:31.987871166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:36:31.989124 containerd[1677]: time="2026-01-24T00:36:31.989030424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:36:31.989124 containerd[1677]: time="2026-01-24T00:36:31.989101348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:36:31.989360 kubelet[2887]: E0124 00:36:31.989328 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:31.989432 kubelet[2887]: E0124 00:36:31.989371 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:36:31.989551 kubelet[2887]: E0124 00:36:31.989489 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dj67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cbbc8dcd-w5nn4_calico-system(f695da60-5d07-4b6c-8f24-e49612d3b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:36:31.991359 kubelet[2887]: E0124 00:36:31.991333 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:36:40.658024 kubelet[2887]: E0124 00:36:40.657987 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:36:41.657284 kubelet[2887]: E0124 00:36:41.657022 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:36:41.658167 kubelet[2887]: E0124 00:36:41.658096 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:36:42.657283 kubelet[2887]: E0124 00:36:42.657138 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:36:42.657495 kubelet[2887]: E0124 00:36:42.657413 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:36:44.656397 kubelet[2887]: E0124 00:36:44.656307 2887 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:36:53.658606 kubelet[2887]: E0124 00:36:53.657464 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:36:53.660408 kubelet[2887]: E0124 00:36:53.658677 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:36:55.656991 kubelet[2887]: E0124 00:36:55.656600 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:36:55.657451 kubelet[2887]: E0124 00:36:55.657344 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" 
podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:36:56.658583 kubelet[2887]: E0124 00:36:56.658525 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:36:58.657398 kubelet[2887]: E0124 00:36:58.657356 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:37:05.657160 kubelet[2887]: E0124 00:37:05.657108 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:37:06.656707 kubelet[2887]: E0124 00:37:06.656636 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:37:06.656707 kubelet[2887]: E0124 00:37:06.656706 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:37:08.656189 kubelet[2887]: E0124 00:37:08.655891 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:37:09.657097 kubelet[2887]: E0124 00:37:09.657043 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:37:13.656201 containerd[1677]: time="2026-01-24T00:37:13.656154744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:37:13.999367 containerd[1677]: time="2026-01-24T00:37:13.999302054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:14.000732 containerd[1677]: time="2026-01-24T00:37:14.000686091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:37:14.001257 containerd[1677]: time="2026-01-24T00:37:14.000798182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:14.001669 kubelet[2887]: E0124 00:37:14.001425 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:14.001669 kubelet[2887]: E0124 00:37:14.001473 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:14.001669 kubelet[2887]: E0124 00:37:14.001623 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnqcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-x9lmq_calico-apiserver(c22b1229-18a4-4f62-9e22-d2ed4a3840d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:14.003205 kubelet[2887]: E0124 00:37:14.003171 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:37:17.657435 containerd[1677]: time="2026-01-24T00:37:17.657393925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:37:17.972694 containerd[1677]: time="2026-01-24T00:37:17.972459405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:17.974435 containerd[1677]: time="2026-01-24T00:37:17.974349151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:37:17.974636 containerd[1677]: time="2026-01-24T00:37:17.974417695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:17.974878 kubelet[2887]: E0124 
00:37:17.974838 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:37:17.975193 kubelet[2887]: E0124 00:37:17.974889 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:37:17.975324 kubelet[2887]: E0124 00:37:17.975012 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5f190ef84d634309a1c72b69d1983ada,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:17.978149 containerd[1677]: time="2026-01-24T00:37:17.978091979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:37:18.317151 containerd[1677]: time="2026-01-24T00:37:18.316990677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:18.321228 containerd[1677]: time="2026-01-24T00:37:18.320237089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:37:18.321228 containerd[1677]: time="2026-01-24T00:37:18.320338668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:18.321533 kubelet[2887]: E0124 00:37:18.321496 2887 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:37:18.321669 kubelet[2887]: E0124 00:37:18.321641 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:37:18.321896 kubelet[2887]: E0124 00:37:18.321796 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:18.323301 kubelet[2887]: E0124 00:37:18.323258 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:37:19.660875 containerd[1677]: time="2026-01-24T00:37:19.660098636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:37:19.999446 containerd[1677]: time="2026-01-24T00:37:19.999397641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:20.000492 containerd[1677]: time="2026-01-24T00:37:20.000463827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:37:20.000600 containerd[1677]: time="2026-01-24T00:37:20.000535769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:20.000672 kubelet[2887]: E0124 00:37:20.000639 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:37:20.001016 kubelet[2887]: E0124 00:37:20.000681 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:37:20.001016 kubelet[2887]: E0124 00:37:20.000781 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:20.003607 containerd[1677]: time="2026-01-24T00:37:20.002963747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:37:20.535038 containerd[1677]: time="2026-01-24T00:37:20.534894766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:20.537218 containerd[1677]: time="2026-01-24T00:37:20.536089592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:37:20.537218 containerd[1677]: time="2026-01-24T00:37:20.536169079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:20.537509 kubelet[2887]: E0124 00:37:20.537448 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:37:20.537509 kubelet[2887]: E0124 00:37:20.537496 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:37:20.537925 kubelet[2887]: E0124 00:37:20.537694 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:20.539161 kubelet[2887]: E0124 00:37:20.539126 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:37:20.656679 containerd[1677]: time="2026-01-24T00:37:20.656300044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:37:20.985813 containerd[1677]: time="2026-01-24T00:37:20.985549935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:20.988052 containerd[1677]: time="2026-01-24T00:37:20.988004254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:37:20.988230 containerd[1677]: 
time="2026-01-24T00:37:20.988089696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:20.988410 kubelet[2887]: E0124 00:37:20.988335 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:37:20.988410 kubelet[2887]: E0124 00:37:20.988405 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:37:20.989256 kubelet[2887]: E0124 00:37:20.988570 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dj67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-64cbbc8dcd-w5nn4_calico-system(f695da60-5d07-4b6c-8f24-e49612d3b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:20.989780 kubelet[2887]: E0124 00:37:20.989748 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:37:21.656874 containerd[1677]: time="2026-01-24T00:37:21.656540943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:37:21.982151 containerd[1677]: time="2026-01-24T00:37:21.981875538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:21.983391 containerd[1677]: time="2026-01-24T00:37:21.983253849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:37:21.983391 containerd[1677]: time="2026-01-24T00:37:21.983352863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:21.983552 kubelet[2887]: E0124 00:37:21.983479 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:21.983552 kubelet[2887]: E0124 00:37:21.983521 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:37:21.983951 kubelet[2887]: E0124 00:37:21.983622 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-9wsxh_calico-apiserver(bd7b47a3-e9a9-4695-b299-4e30c1f99caf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:21.985041 kubelet[2887]: E0124 00:37:21.984987 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:37:24.656492 containerd[1677]: time="2026-01-24T00:37:24.656453569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:37:24.985040 containerd[1677]: time="2026-01-24T00:37:24.984686497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:37:24.986821 containerd[1677]: time="2026-01-24T00:37:24.986709737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:37:24.986821 containerd[1677]: time="2026-01-24T00:37:24.986745391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:37:24.987067 kubelet[2887]: 
E0124 00:37:24.986911 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:37:24.987067 kubelet[2887]: E0124 00:37:24.986970 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:37:24.987609 kubelet[2887]: E0124 00:37:24.987090 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghhgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmlzg_calico-system(cc6833e1-bfb0-4eb5-9ff2-60bda2e93290): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:37:24.988319 kubelet[2887]: E0124 00:37:24.988294 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:37:27.657176 kubelet[2887]: E0124 00:37:27.656828 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:37:30.656907 kubelet[2887]: E0124 00:37:30.656837 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:37:32.656093 kubelet[2887]: E0124 00:37:32.655998 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:37:34.129327 kernel: kauditd_printk_skb: 214 callbacks suppressed Jan 24 00:37:34.129424 kernel: audit: type=1130 audit(1769215054.124:738): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.1.115:22-4.153.228.146:56880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:34.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.1.115:22-4.153.228.146:56880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:34.125117 systemd[1]: Started sshd@9-10.0.1.115:22-4.153.228.146:56880.service - OpenSSH per-connection server daemon (4.153.228.146:56880). 
Jan 24 00:37:34.674000 audit[5024]: USER_ACCT pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:34.679274 kernel: audit: type=1101 audit(1769215054.674:739): pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:34.679309 sshd[5024]: Accepted publickey for core from 4.153.228.146 port 56880 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:34.679000 audit[5024]: CRED_ACQ pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:34.681241 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:34.685319 kernel: audit: type=1103 audit(1769215054.679:740): pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:34.688318 kernel: audit: type=1006 audit(1769215054.679:741): pid=5024 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 24 00:37:34.692236 systemd-logind[1654]: New session 11 of user core. Jan 24 00:37:34.679000 audit[5024]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe25d63c70 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:34.699789 kernel: audit: type=1300 audit(1769215054.679:741): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe25d63c70 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:34.699841 kernel: audit: type=1327 audit(1769215054.679:741): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:34.679000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:34.699123 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 24 00:37:34.702000 audit[5024]: USER_START pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:34.708258 kernel: audit: type=1105 audit(1769215054.702:742): pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:34.707000 audit[5028]: CRED_ACQ pid=5028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:34.714229 kernel: audit: type=1103 audit(1769215054.707:743): pid=5028 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:35.049012 sshd[5028]: Connection closed by 4.153.228.146 port 56880 Jan 24 00:37:35.049345 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:35.049000 audit[5024]: USER_END pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:35.056746 kernel: audit: type=1106 audit(1769215055.049:744): pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:35.057515 systemd-logind[1654]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:37:35.058394 systemd[1]: sshd@9-10.0.1.115:22-4.153.228.146:56880.service: Deactivated successfully. Jan 24 00:37:35.060195 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:37:35.064745 systemd-logind[1654]: Removed session 11. Jan 24 00:37:35.049000 audit[5024]: CRED_DISP pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:35.069235 kernel: audit: type=1104 audit(1769215055.049:745): pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:35.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.1.115:22-4.153.228.146:56880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:37:35.658958 kubelet[2887]: E0124 00:37:35.658899 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:37:35.659987 kubelet[2887]: E0124 00:37:35.659338 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:37:37.657116 kubelet[2887]: E0124 00:37:37.657051 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:37:40.164161 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:37:40.164277 kernel: audit: type=1130 audit(1769215060.158:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.1.115:22-4.153.228.146:46890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:40.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.1.115:22-4.153.228.146:46890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:40.159699 systemd[1]: Started sshd@10-10.0.1.115:22-4.153.228.146:46890.service - OpenSSH per-connection server daemon (4.153.228.146:46890). 
Jan 24 00:37:40.708000 audit[5041]: USER_ACCT pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:40.711111 sshd[5041]: Accepted publickey for core from 4.153.228.146 port 46890 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:40.712359 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:40.710000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:40.714509 kernel: audit: type=1101 audit(1769215060.708:748): pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:40.714557 kernel: audit: type=1103 audit(1769215060.710:749): pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:40.719192 kernel: audit: type=1006 audit(1769215060.710:750): pid=5041 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 24 00:37:40.710000 audit[5041]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb55f0710 a2=3 a3=0 items=0 ppid=1 pid=5041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:40.722264 kernel: audit: type=1300 audit(1769215060.710:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb55f0710 a2=3 a3=0 items=0 ppid=1 pid=5041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:40.721660 systemd-logind[1654]: New session 12 of user core. Jan 24 00:37:40.710000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:40.724692 kernel: audit: type=1327 audit(1769215060.710:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:40.727375 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 24 00:37:40.730000 audit[5041]: USER_START pid=5041 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:40.736234 kernel: audit: type=1105 audit(1769215060.730:751): pid=5041 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:40.733000 audit[5045]: CRED_ACQ pid=5045 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:40.740229 kernel: audit: type=1103 audit(1769215060.733:752): pid=5045 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:41.076754 sshd[5045]: Connection closed by 4.153.228.146 port 46890 Jan 24 00:37:41.077179 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:41.077000 audit[5041]: USER_END pid=5041 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:41.081259 systemd[1]: sshd@10-10.0.1.115:22-4.153.228.146:46890.service: Deactivated successfully. Jan 24 00:37:41.083223 kernel: audit: type=1106 audit(1769215061.077:753): pid=5041 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:41.083923 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:37:41.085619 systemd-logind[1654]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:37:41.086554 systemd-logind[1654]: Removed session 12. Jan 24 00:37:41.077000 audit[5041]: CRED_DISP pid=5041 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:41.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.1.115:22-4.153.228.146:46890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:37:41.091263 kernel: audit: type=1104 audit(1769215061.077:754): pid=5041 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:42.657096 kubelet[2887]: E0124 00:37:42.657022 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:37:43.657466 kubelet[2887]: E0124 00:37:43.657261 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:37:46.185378 systemd[1]: Started sshd@11-10.0.1.115:22-4.153.228.146:42410.service - OpenSSH per-connection server daemon (4.153.228.146:42410). Jan 24 00:37:46.191600 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:37:46.191629 kernel: audit: type=1130 audit(1769215066.184:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.1.115:22-4.153.228.146:42410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:46.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.1.115:22-4.153.228.146:42410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:37:46.740000 audit[5084]: USER_ACCT pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:46.747461 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:46.748787 sshd[5084]: Accepted publickey for core from 4.153.228.146 port 42410 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:46.744000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:46.752403 kernel: audit: type=1101 audit(1769215066.740:757): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:46.752491 kernel: audit: type=1103 audit(1769215066.744:758): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:46.744000 audit[5084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec1462d50 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:46.764062 kernel: audit: type=1006 audit(1769215066.744:759): pid=5084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 24 00:37:46.764149 kernel: audit: type=1300 audit(1769215066.744:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec1462d50 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:46.744000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:46.771235 kernel: audit: type=1327 audit(1769215066.744:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:46.774120 systemd-logind[1654]: New session 13 of user core. Jan 24 00:37:46.778446 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 24 00:37:46.782000 audit[5084]: USER_START pid=5084 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:46.790234 kernel: audit: type=1105 audit(1769215066.782:760): pid=5084 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:46.790985 kernel: audit: type=1103 audit(1769215066.789:761): pid=5088 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:46.789000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.143050 sshd[5088]: Connection closed by 4.153.228.146 port 42410 Jan 24 00:37:47.145289 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:47.145000 audit[5084]: USER_END pid=5084 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.152153 systemd[1]: sshd@11-10.0.1.115:22-4.153.228.146:42410.service: Deactivated successfully. Jan 24 00:37:47.152472 kernel: audit: type=1106 audit(1769215067.145:762): pid=5084 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.153663 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:37:47.145000 audit[5084]: CRED_DISP pid=5084 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.158873 systemd-logind[1654]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:37:47.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.1.115:22-4.153.228.146:42410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:47.159400 kernel: audit: type=1104 audit(1769215067.145:763): pid=5084 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.159829 systemd-logind[1654]: Removed session 13. 
Jan 24 00:37:47.250730 systemd[1]: Started sshd@12-10.0.1.115:22-4.153.228.146:42424.service - OpenSSH per-connection server daemon (4.153.228.146:42424). Jan 24 00:37:47.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.1.115:22-4.153.228.146:42424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:47.658313 kubelet[2887]: E0124 00:37:47.658259 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:37:47.796000 audit[5105]: USER_ACCT pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.797867 sshd[5105]: Accepted publickey for core from 4.153.228.146 port 42424 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:47.797000 audit[5105]: CRED_ACQ pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.797000 audit[5105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc678915e0 a2=3 a3=0 items=0 ppid=1 pid=5105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:47.797000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:47.799370 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:47.804323 systemd-logind[1654]: New session 14 of user core. Jan 24 00:37:47.808424 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:37:47.810000 audit[5105]: USER_START pid=5105 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:47.812000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:48.188066 sshd[5109]: Connection closed by 4.153.228.146 port 42424 Jan 24 00:37:48.187469 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:48.187000 audit[5105]: USER_END pid=5105 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:48.187000 audit[5105]: CRED_DISP pid=5105 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:48.190797 systemd-logind[1654]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:37:48.191025 systemd[1]: sshd@12-10.0.1.115:22-4.153.228.146:42424.service: Deactivated successfully. Jan 24 00:37:48.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.1.115:22-4.153.228.146:42424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:48.193041 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:37:48.195430 systemd-logind[1654]: Removed session 14. Jan 24 00:37:48.293920 systemd[1]: Started sshd@13-10.0.1.115:22-4.153.228.146:42436.service - OpenSSH per-connection server daemon (4.153.228.146:42436). Jan 24 00:37:48.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.1.115:22-4.153.228.146:42436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:37:48.655847 kubelet[2887]: E0124 00:37:48.655751 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:37:48.656392 kubelet[2887]: E0124 00:37:48.656276 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:37:48.829000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:48.831103 sshd[5118]: Accepted publickey for core from 4.153.228.146 port 42436 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:48.830000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:48.830000 audit[5118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd09a6d470 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:48.830000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:48.831983 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:48.839144 systemd-logind[1654]: New session 15 of user core. Jan 24 00:37:48.845520 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:37:48.848000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:48.849000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:49.199075 sshd[5122]: Connection closed by 4.153.228.146 port 42436 Jan 24 00:37:49.200791 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:49.200000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:49.200000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:49.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.1.115:22-4.153.228.146:42436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:49.204679 systemd[1]: sshd@13-10.0.1.115:22-4.153.228.146:42436.service: Deactivated successfully. Jan 24 00:37:49.206783 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:37:49.208628 systemd-logind[1654]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:37:49.211087 systemd-logind[1654]: Removed session 15. Jan 24 00:37:49.656604 kubelet[2887]: E0124 00:37:49.656574 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:37:54.315492 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 00:37:54.315634 kernel: audit: type=1130 audit(1769215074.309:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.1.115:22-4.153.228.146:42442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:54.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.1.115:22-4.153.228.146:42442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:54.310488 systemd[1]: Started sshd@14-10.0.1.115:22-4.153.228.146:42442.service - OpenSSH per-connection server daemon (4.153.228.146:42442). 
Jan 24 00:37:54.859000 audit[5134]: USER_ACCT pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:54.864606 sshd[5134]: Accepted publickey for core from 4.153.228.146 port 42442 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:54.866316 kernel: audit: type=1101 audit(1769215074.859:784): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:54.865000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:54.868551 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:54.871247 kernel: audit: type=1103 audit(1769215074.865:785): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:54.865000 audit[5134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce9c8d2d0 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:54.876566 kernel: audit: type=1006 audit(1769215074.865:786): pid=5134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 24 00:37:54.876634 kernel: audit: type=1300 audit(1769215074.865:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce9c8d2d0 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:54.865000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:54.882224 kernel: audit: type=1327 audit(1769215074.865:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:54.884272 systemd-logind[1654]: New session 16 of user core. Jan 24 00:37:54.894547 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 24 00:37:54.896000 audit[5134]: USER_START pid=5134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:54.898000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:54.903980 kernel: audit: type=1105 audit(1769215074.896:787): pid=5134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:54.904031 kernel: audit: type=1103 audit(1769215074.898:788): pid=5138 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.275346 sshd[5138]: Connection closed by 4.153.228.146 port 42442 Jan 24 00:37:55.277422 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:55.281000 audit[5134]: USER_END pid=5134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.288668 kernel: audit: type=1106 audit(1769215075.281:789): pid=5134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.290365 kernel: audit: type=1104 audit(1769215075.281:790): pid=5134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.281000 audit[5134]: CRED_DISP pid=5134 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.290065 systemd[1]: sshd@14-10.0.1.115:22-4.153.228.146:42442.service: Deactivated successfully. Jan 24 00:37:55.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.1.115:22-4.153.228.146:42442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:55.294807 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:37:55.296646 systemd-logind[1654]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:37:55.298565 systemd-logind[1654]: Removed session 16. 
Jan 24 00:37:55.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.1.115:22-4.153.228.146:54748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:55.389568 systemd[1]: Started sshd@15-10.0.1.115:22-4.153.228.146:54748.service - OpenSSH per-connection server daemon (4.153.228.146:54748). Jan 24 00:37:55.942000 audit[5151]: USER_ACCT pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.942463 sshd[5151]: Accepted publickey for core from 4.153.228.146 port 54748 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:55.942000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.943000 audit[5151]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38cd7f50 a2=3 a3=0 items=0 ppid=1 pid=5151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:55.943000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:55.944641 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:55.948690 systemd-logind[1654]: New session 17 of user core. Jan 24 00:37:55.955388 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:37:55.957000 audit[5151]: USER_START pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:55.959000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:56.549854 sshd[5155]: Connection closed by 4.153.228.146 port 54748 Jan 24 00:37:56.550459 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:56.552000 audit[5151]: USER_END pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:56.553000 audit[5151]: CRED_DISP pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:56.556454 systemd-logind[1654]: Session 17 logged out. Waiting for processes to exit. 
Jan 24 00:37:56.556636 systemd[1]: sshd@15-10.0.1.115:22-4.153.228.146:54748.service: Deactivated successfully. Jan 24 00:37:56.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.1.115:22-4.153.228.146:54748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:56.560230 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:37:56.562992 systemd-logind[1654]: Removed session 17. Jan 24 00:37:56.665398 systemd[1]: Started sshd@16-10.0.1.115:22-4.153.228.146:54756.service - OpenSSH per-connection server daemon (4.153.228.146:54756). Jan 24 00:37:56.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.1.115:22-4.153.228.146:54756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:57.215000 audit[5166]: USER_ACCT pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:57.215969 sshd[5166]: Accepted publickey for core from 4.153.228.146 port 54756 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:57.216000 audit[5166]: CRED_ACQ pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:57.216000 audit[5166]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff452da870 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:57.216000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:57.218511 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:57.224411 systemd-logind[1654]: New session 18 of user core. Jan 24 00:37:57.230333 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 24 00:37:57.234000 audit[5166]: USER_START pid=5166 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:57.236000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:57.658456 kubelet[2887]: E0124 00:37:57.658397 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:37:57.659563 kubelet[2887]: E0124 00:37:57.659112 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:37:58.031000 audit[5180]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:37:58.031000 audit[5180]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc81629d10 a2=0 a3=7ffc81629cfc items=0 ppid=3004 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:58.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:37:58.033000 audit[5180]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:37:58.033000 audit[5180]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc81629d10 a2=0 a3=0 items=0 ppid=3004 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:58.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
24 00:37:58.058000 audit[5182]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=5182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:37:58.058000 audit[5182]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc71576c30 a2=0 a3=7ffc71576c1c items=0 ppid=3004 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:58.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:37:58.061000 audit[5182]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:37:58.061000 audit[5182]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc71576c30 a2=0 a3=0 items=0 ppid=3004 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:58.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:37:58.143826 sshd[5170]: Connection closed by 4.153.228.146 port 54756 Jan 24 00:37:58.145076 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:58.146000 audit[5166]: USER_END pid=5166 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:58.146000 audit[5166]: CRED_DISP pid=5166 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:58.148808 systemd[1]: sshd@16-10.0.1.115:22-4.153.228.146:54756.service: Deactivated successfully. Jan 24 00:37:58.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.1.115:22-4.153.228.146:54756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:58.150809 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:37:58.152348 systemd-logind[1654]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:37:58.153054 systemd-logind[1654]: Removed session 18. Jan 24 00:37:58.255660 systemd[1]: Started sshd@17-10.0.1.115:22-4.153.228.146:54760.service - OpenSSH per-connection server daemon (4.153.228.146:54760). Jan 24 00:37:58.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.1.115:22-4.153.228.146:54760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:37:58.796000 audit[5187]: USER_ACCT pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:58.796808 sshd[5187]: Accepted publickey for core from 4.153.228.146 port 54760 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:58.797000 audit[5187]: CRED_ACQ pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:58.797000 audit[5187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd688ba30 a2=3 a3=0 items=0 ppid=1 pid=5187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:58.797000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:58.798287 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:58.803616 systemd-logind[1654]: New session 19 of user core. Jan 24 00:37:58.809387 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:37:58.812000 audit[5187]: USER_START pid=5187 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:58.814000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.278281 sshd[5191]: Connection closed by 4.153.228.146 port 54760 Jan 24 00:37:59.278147 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:59.281000 audit[5187]: USER_END pid=5187 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.281000 audit[5187]: CRED_DISP pid=5187 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.283673 systemd[1]: sshd@17-10.0.1.115:22-4.153.228.146:54760.service: Deactivated successfully. Jan 24 00:37:59.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.1.115:22-4.153.228.146:54760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:59.286517 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:37:59.288003 systemd-logind[1654]: Session 19 logged out. Waiting for processes to exit. 
Jan 24 00:37:59.290259 systemd-logind[1654]: Removed session 19. Jan 24 00:37:59.387665 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 24 00:37:59.387755 kernel: audit: type=1130 audit(1769215079.386:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.1.115:22-4.153.228.146:54766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:59.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.1.115:22-4.153.228.146:54766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:37:59.386534 systemd[1]: Started sshd@18-10.0.1.115:22-4.153.228.146:54766.service - OpenSSH per-connection server daemon (4.153.228.146:54766). Jan 24 00:37:59.929000 audit[5201]: USER_ACCT pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.933496 sshd[5201]: Accepted publickey for core from 4.153.228.146 port 54766 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:37:59.935248 kernel: audit: type=1101 audit(1769215079.929:824): pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.935329 kernel: audit: type=1103 audit(1769215079.932:825): pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.932000 audit[5201]: CRED_ACQ pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.935674 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:59.940144 kernel: audit: type=1006 audit(1769215079.932:826): pid=5201 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 24 00:37:59.932000 audit[5201]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0c8d2710 a2=3 a3=0 items=0 ppid=1 pid=5201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:59.943546 kernel: audit: type=1300 audit(1769215079.932:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0c8d2710 a2=3 a3=0 items=0 ppid=1 pid=5201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:37:59.944198 systemd-logind[1654]: New session 20 of user core. 
Jan 24 00:37:59.932000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:59.946691 kernel: audit: type=1327 audit(1769215079.932:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:37:59.949356 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:37:59.952000 audit[5201]: USER_START pid=5201 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.957000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.959264 kernel: audit: type=1105 audit(1769215079.952:827): pid=5201 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:37:59.959333 kernel: audit: type=1103 audit(1769215079.957:828): pid=5205 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:00.309234 sshd[5205]: Connection closed by 4.153.228.146 port 54766 Jan 24 00:38:00.309733 sshd-session[5201]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:00.310000 audit[5201]: USER_END pid=5201 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:00.313249 systemd[1]: sshd@18-10.0.1.115:22-4.153.228.146:54766.service: Deactivated successfully. Jan 24 00:38:00.316501 kernel: audit: type=1106 audit(1769215080.310:829): pid=5201 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:00.315820 systemd-logind[1654]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:38:00.316330 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:38:00.319160 systemd-logind[1654]: Removed session 20. Jan 24 00:38:00.310000 audit[5201]: CRED_DISP pid=5201 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:00.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.1.115:22-4.153.228.146:54766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:38:00.324222 kernel: audit: type=1104 audit(1769215080.310:830): pid=5201 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:00.656507 kubelet[2887]: E0124 00:38:00.656234 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:38:00.656507 kubelet[2887]: E0124 00:38:00.656435 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:38:00.657131 kubelet[2887]: E0124 00:38:00.656822 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:38:02.614000 audit[5217]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:38:02.614000 audit[5217]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb9347c80 a2=0 a3=7ffeb9347c6c items=0 ppid=3004 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:02.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:38:02.622000 audit[5217]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:38:02.622000 audit[5217]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffeb9347c80 a2=0 a3=7ffeb9347c6c items=0 ppid=3004 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:02.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:38:03.660915 kubelet[2887]: E0124 00:38:03.660880 2887 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:38:05.425567 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 00:38:05.425660 kernel: audit: type=1130 audit(1769215085.420:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.1.115:22-4.153.228.146:48058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:05.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.1.115:22-4.153.228.146:48058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:05.420550 systemd[1]: Started sshd@19-10.0.1.115:22-4.153.228.146:48058.service - OpenSSH per-connection server daemon (4.153.228.146:48058). Jan 24 00:38:05.958000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:05.960295 sshd[5219]: Accepted publickey for core from 4.153.228.146 port 48058 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:38:05.962137 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:05.963292 kernel: audit: type=1101 audit(1769215085.958:835): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:05.960000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:05.968916 kernel: audit: type=1103 audit(1769215085.960:836): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:05.968968 kernel: audit: type=1006 audit(1769215085.960:837): pid=5219 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 00:38:05.960000 audit[5219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd78208e00 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:05.978492 kernel: audit: type=1300 audit(1769215085.960:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd78208e00 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:05.978538 kernel: audit: type=1327 audit(1769215085.960:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:05.960000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:05.978848 systemd-logind[1654]: New session 21 of user core. Jan 24 00:38:05.982406 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:38:05.986000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:05.992237 kernel: audit: type=1105 audit(1769215085.986:838): pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:05.992000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:05.996258 kernel: audit: type=1103 audit(1769215085.992:839): pid=5225 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:06.340945 sshd[5225]: Connection closed by 4.153.228.146 port 48058 Jan 24 00:38:06.341786 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:06.342000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:06.348231 kernel: audit: type=1106 audit(1769215086.342:840): pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:06.348321 systemd[1]: sshd@19-10.0.1.115:22-4.153.228.146:48058.service: Deactivated successfully. Jan 24 00:38:06.349912 systemd[1]: session-21.scope: Deactivated successfully. 
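Annotation: the PROCTITLE records carry the audited process's command line hex-encoded, with NUL bytes separating argv entries. Decoding them is a one-liner; the two values below are copied verbatim from the sshd-session and iptables-restore records in this log:

    def decode_proctitle(hexstr: str) -> str:
        # argv entries are NUL-separated in the raw proctitle; print them space-separated
        return bytes.fromhex(hexstr).decode("utf-8", errors="replace").replace("\x00", " ")

    # from the sshd-session PROCTITLE records
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]

    # from the iptables-restore NETFILTER_CFG records
    print(decode_proctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters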
Jan 24 00:38:06.342000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:06.355283 kernel: audit: type=1104 audit(1769215086.342:841): pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:06.354891 systemd-logind[1654]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:38:06.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.1.115:22-4.153.228.146:48058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:06.357182 systemd-logind[1654]: Removed session 21. Jan 24 00:38:08.656321 kubelet[2887]: E0124 00:38:08.656243 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:38:11.443983 systemd[1]: Started sshd@20-10.0.1.115:22-4.153.228.146:48060.service - OpenSSH per-connection server daemon (4.153.228.146:48060). Jan 24 00:38:11.450297 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:38:11.450365 kernel: audit: type=1130 audit(1769215091.444:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.1.115:22-4.153.228.146:48060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:11.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.1.115:22-4.153.228.146:48060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:38:11.658307 kubelet[2887]: E0124 00:38:11.658254 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:38:11.960288 kernel: audit: type=1101 audit(1769215091.954:844): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:11.954000 audit[5236]: USER_ACCT pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:11.960445 sshd[5236]: Accepted publickey for core from 4.153.228.146 port 48060 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:38:11.960913 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:11.959000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:11.967476 kernel: audit: type=1103 audit(1769215091.959:845): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:11.967531 kernel: audit: type=1006 audit(1769215091.959:846): pid=5236 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 00:38:11.959000 audit[5236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0793b8b0 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:11.977284 kernel: audit: type=1300 audit(1769215091.959:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0793b8b0 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:11.977931 systemd-logind[1654]: New session 22 of user core. Jan 24 00:38:11.959000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:11.981234 kernel: audit: type=1327 audit(1769215091.959:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:11.982410 systemd[1]: Started session-22.scope - Session 22 of User core. 
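Annotation: the kubelet keeps emitting the same "Error syncing pod, skipping" records for a handful of Calico image tags. A rough parsing sketch over a captured copy of this console output tallies which image references and pods are affected; the file name is hypothetical, and the regexes only target the patterns visible in these lines:

    import re
    from collections import Counter

    pod_re = re.compile(r'pod="([^"]+)"')
    img_re = re.compile(r'(ghcr\.io/flatcar/calico/[\w./-]+:v[\d.]+)')

    pods, images = Counter(), Counter()
    with open("node-console.log") as fh:              # hypothetical dump of these lines
        for line in fh:
            if "Error syncing pod" not in line:
                continue
            pods.update(pod_re.findall(line))
            images.update(set(img_re.findall(line)))  # one count per image per record

    for image, n in images.most_common():
        print(f"{n:3d}  {image}")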
Jan 24 00:38:11.986000 audit[5236]: USER_START pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:11.992412 kernel: audit: type=1105 audit(1769215091.986:847): pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:11.992000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:11.996281 kernel: audit: type=1103 audit(1769215091.992:848): pid=5240 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:12.310651 sshd[5240]: Connection closed by 4.153.228.146 port 48060 Jan 24 00:38:12.309547 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:12.310000 audit[5236]: USER_END pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:12.312880 systemd-logind[1654]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:38:12.314502 systemd[1]: sshd@20-10.0.1.115:22-4.153.228.146:48060.service: Deactivated successfully. Jan 24 00:38:12.316562 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:38:12.319263 kernel: audit: type=1106 audit(1769215092.310:849): pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:12.319324 kernel: audit: type=1104 audit(1769215092.310:850): pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:12.310000 audit[5236]: CRED_DISP pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:12.318584 systemd-logind[1654]: Removed session 22. Jan 24 00:38:12.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.1.115:22-4.153.228.146:48060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:38:13.657172 kubelet[2887]: E0124 00:38:13.657141 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:38:14.657249 kubelet[2887]: E0124 00:38:14.656619 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:38:15.656697 kubelet[2887]: E0124 00:38:15.656377 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:38:17.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.1.115:22-4.153.228.146:45746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:17.424435 systemd[1]: Started sshd@21-10.0.1.115:22-4.153.228.146:45746.service - OpenSSH per-connection server daemon (4.153.228.146:45746). Jan 24 00:38:17.426543 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:38:17.426582 kernel: audit: type=1130 audit(1769215097.424:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.1.115:22-4.153.228.146:45746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:38:17.965000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:17.968028 sshd[5280]: Accepted publickey for core from 4.153.228.146 port 45746 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:38:17.972233 kernel: audit: type=1101 audit(1769215097.965:853): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:17.972610 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:17.970000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:17.978408 kernel: audit: type=1103 audit(1769215097.970:854): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:17.978474 kernel: audit: type=1006 audit(1769215097.970:855): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 24 00:38:17.970000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd89ac2960 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:17.987239 kernel: audit: type=1300 audit(1769215097.970:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd89ac2960 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:17.987306 kernel: audit: type=1327 audit(1769215097.970:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:17.970000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:17.984709 systemd-logind[1654]: New session 23 of user core. Jan 24 00:38:17.988595 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 24 00:38:17.997299 kernel: audit: type=1105 audit(1769215097.989:856): pid=5280 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:17.989000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:18.003276 kernel: audit: type=1103 audit(1769215097.998:857): pid=5284 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:17.998000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:18.334777 sshd[5284]: Connection closed by 4.153.228.146 port 45746 Jan 24 00:38:18.335268 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:18.335000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:18.341799 systemd[1]: sshd@21-10.0.1.115:22-4.153.228.146:45746.service: Deactivated successfully. Jan 24 00:38:18.346545 kernel: audit: type=1106 audit(1769215098.335:858): pid=5280 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:18.346605 kernel: audit: type=1104 audit(1769215098.335:859): pid=5280 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:18.335000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:18.344980 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:38:18.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.1.115:22-4.153.228.146:45746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:18.348292 systemd-logind[1654]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:38:18.349046 systemd-logind[1654]: Removed session 23. 
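Annotation: each of these short SSH connections leaves the same audit trail keyed by the sshd-session pid (USER_ACCT, CRED_ACQ, SYSCALL, USER_START, USER_END, CRED_DISP), so grouping records by that pid reconstructs one connection's lifecycle. A minimal sketch over a saved copy of these lines (file name hypothetical); the kernel-echoed "audit: type=..." duplicates are skipped automatically because they lack the "audit[pid]:" prefix:

    import re
    from collections import defaultdict

    rec_re = re.compile(r'audit\[(\d+)\]: (\w+) ')

    events = defaultdict(list)
    with open("node-console.log") as fh:              # hypothetical dump of these lines
        for line in fh:
            for pid, kind in rec_re.findall(line):
                events[pid].append(kind)

    for pid, kinds in sorted(events.items(), key=lambda kv: int(kv[0])):
        print(f"pid {pid}: {' -> '.join(kinds)}")
    # e.g. pid 5280: USER_ACCT -> CRED_ACQ -> SYSCALL -> USER_START -> USER_END -> CRED_DISP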
Jan 24 00:38:18.656866 kubelet[2887]: E0124 00:38:18.656270 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:38:21.656918 kubelet[2887]: E0124 00:38:21.656751 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:38:23.446454 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:38:23.446578 kernel: audit: type=1130 audit(1769215103.440:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.1.115:22-4.153.228.146:45756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:23.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.1.115:22-4.153.228.146:45756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:23.441607 systemd[1]: Started sshd@22-10.0.1.115:22-4.153.228.146:45756.service - OpenSSH per-connection server daemon (4.153.228.146:45756). 
Jan 24 00:38:23.959000 audit[5295]: USER_ACCT pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:23.964335 sshd[5295]: Accepted publickey for core from 4.153.228.146 port 45756 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:38:23.966282 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:23.967336 kernel: audit: type=1101 audit(1769215103.959:862): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:23.963000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:23.974525 kernel: audit: type=1103 audit(1769215103.963:863): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:23.974585 kernel: audit: type=1006 audit(1769215103.963:864): pid=5295 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 24 00:38:23.977248 kernel: audit: type=1300 audit(1769215103.963:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd83130a10 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:23.963000 audit[5295]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd83130a10 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:23.979269 systemd-logind[1654]: New session 24 of user core. Jan 24 00:38:23.963000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:23.982269 kernel: audit: type=1327 audit(1769215103.963:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:23.985380 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 24 00:38:23.987000 audit[5295]: USER_START pid=5295 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:23.992000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:23.995473 kernel: audit: type=1105 audit(1769215103.987:865): pid=5295 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:23.995522 kernel: audit: type=1103 audit(1769215103.992:866): pid=5299 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:24.366995 sshd[5299]: Connection closed by 4.153.228.146 port 45756 Jan 24 00:38:24.367693 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:24.367000 audit[5295]: USER_END pid=5295 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:24.370852 systemd-logind[1654]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:38:24.372742 systemd[1]: sshd@22-10.0.1.115:22-4.153.228.146:45756.service: Deactivated successfully. Jan 24 00:38:24.374353 kernel: audit: type=1106 audit(1769215104.367:867): pid=5295 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:24.374764 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:38:24.367000 audit[5295]: CRED_DISP pid=5295 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:24.378050 systemd-logind[1654]: Removed session 24. Jan 24 00:38:24.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.1.115:22-4.153.228.146:45756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:38:24.381686 kernel: audit: type=1104 audit(1769215104.367:868): pid=5295 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:24.656799 kubelet[2887]: E0124 00:38:24.656300 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:38:25.658173 kubelet[2887]: E0124 00:38:25.657648 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:38:26.657276 kubelet[2887]: E0124 00:38:26.657120 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:38:28.655879 kubelet[2887]: E0124 00:38:28.655582 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:38:29.471836 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:38:29.471940 kernel: audit: type=1130 audit(1769215109.469:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.1.115:22-4.153.228.146:33992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:29.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.1.115:22-4.153.228.146:33992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:29.470877 systemd[1]: Started sshd@23-10.0.1.115:22-4.153.228.146:33992.service - OpenSSH per-connection server daemon (4.153.228.146:33992). 
Jan 24 00:38:29.656488 kubelet[2887]: E0124 00:38:29.656078 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:38:29.980406 sshd[5316]: Accepted publickey for core from 4.153.228.146 port 33992 ssh2: RSA SHA256:ITxVf3hbcD4SyUPJifz0ae7GnLqoM/nN+/wH9UHtMyI Jan 24 00:38:29.979000 audit[5316]: USER_ACCT pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:29.981814 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:38:29.979000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:29.986981 kernel: audit: type=1101 audit(1769215109.979:871): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:29.987031 kernel: audit: type=1103 audit(1769215109.979:872): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:29.989416 systemd-logind[1654]: New session 25 of user core. 
Jan 24 00:38:29.992237 kernel: audit: type=1006 audit(1769215109.979:873): pid=5316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 24 00:38:29.979000 audit[5316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd791b2050 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:29.979000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:29.997356 kernel: audit: type=1300 audit(1769215109.979:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd791b2050 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:29.997398 kernel: audit: type=1327 audit(1769215109.979:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:38:29.998384 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 00:38:30.000000 audit[5316]: USER_START pid=5316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.005000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.008482 kernel: audit: type=1105 audit(1769215110.000:874): pid=5316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.008534 kernel: audit: type=1103 audit(1769215110.005:875): pid=5322 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.320392 sshd[5322]: Connection closed by 4.153.228.146 port 33992 Jan 24 00:38:30.321345 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:30.323000 audit[5316]: USER_END pid=5316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.323000 audit[5316]: CRED_DISP pid=5316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.331155 systemd[1]: sshd@23-10.0.1.115:22-4.153.228.146:33992.service: Deactivated successfully. 
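Annotation: every pull above fails the same way, and the containerd entries just below show the underlying HTTP status: 404 Not Found from ghcr.io. The same check can be made outside the kubelet by asking the registry for the manifest directly. The sketch below relies on the standard OCI distribution API and ghcr.io's anonymous token endpoint, which is an assumption about the registry rather than anything recorded in this log; the repository and tag are the ones the kubelet is retrying:

    import json
    import urllib.error
    import urllib.request

    REPO = "flatcar/calico/apiserver"    # repository referenced in the log above
    TAG = "v3.30.4"                      # tag containerd cannot resolve

    # assumed: ghcr.io hands out anonymous pull tokens from /token (OCI distribution flow)
    tok_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
    token = json.load(urllib.request.urlopen(tok_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("manifest found, HTTP", resp.status)
    except urllib.error.HTTPError as exc:
        # a 404 here matches containerd's "failed to resolve image" errors
        print("registry answered HTTP", exc.code, exc.reason)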
Jan 24 00:38:30.332573 kernel: audit: type=1106 audit(1769215110.323:876): pid=5316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.333283 kernel: audit: type=1104 audit(1769215110.323:877): pid=5316 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 24 00:38:30.336413 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:38:30.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.1.115:22-4.153.228.146:33992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:38:30.340056 systemd-logind[1654]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:38:30.342710 systemd-logind[1654]: Removed session 25. Jan 24 00:38:35.658076 kubelet[2887]: E0124 00:38:35.658039 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:38:37.656722 kubelet[2887]: E0124 00:38:37.656616 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:38:38.657475 kubelet[2887]: E0124 00:38:38.657427 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:38:39.656404 containerd[1677]: time="2026-01-24T00:38:39.656329707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:38:39.986087 containerd[1677]: 
time="2026-01-24T00:38:39.985842962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:39.987431 containerd[1677]: time="2026-01-24T00:38:39.987328133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:38:39.987431 containerd[1677]: time="2026-01-24T00:38:39.987414659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:39.987704 kubelet[2887]: E0124 00:38:39.987665 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:38:39.988005 kubelet[2887]: E0124 00:38:39.987712 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:38:39.988005 kubelet[2887]: E0124 00:38:39.987820 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnqcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7b7c7f79fb-x9lmq_calico-apiserver(c22b1229-18a4-4f62-9e22-d2ed4a3840d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:39.989239 kubelet[2887]: E0124 00:38:39.989196 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:38:42.657697 containerd[1677]: time="2026-01-24T00:38:42.657518989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:38:42.991064 containerd[1677]: time="2026-01-24T00:38:42.990749485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:42.992071 containerd[1677]: time="2026-01-24T00:38:42.992036923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:38:42.992147 containerd[1677]: time="2026-01-24T00:38:42.992111590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:42.992317 kubelet[2887]: E0124 00:38:42.992284 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:38:42.992554 kubelet[2887]: E0124 00:38:42.992327 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:38:42.992554 kubelet[2887]: E0124 00:38:42.992440 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b7c7f79fb-9wsxh_calico-apiserver(bd7b47a3-e9a9-4695-b299-4e30c1f99caf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:42.993831 kubelet[2887]: E0124 00:38:42.993802 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:38:44.656174 containerd[1677]: time="2026-01-24T00:38:44.656118337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:38:44.982560 containerd[1677]: time="2026-01-24T00:38:44.981770740Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:44.984288 containerd[1677]: time="2026-01-24T00:38:44.984248135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:38:44.984441 containerd[1677]: time="2026-01-24T00:38:44.984365152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:44.984617 kubelet[2887]: E0124 00:38:44.984582 2887 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:38:44.984864 kubelet[2887]: E0124 00:38:44.984630 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:38:44.984969 kubelet[2887]: E0124 00:38:44.984937 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:44.987988 containerd[1677]: time="2026-01-24T00:38:44.987337057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:38:45.317275 containerd[1677]: time="2026-01-24T00:38:45.317232016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:45.318403 containerd[1677]: time="2026-01-24T00:38:45.318369924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:38:45.318472 containerd[1677]: time="2026-01-24T00:38:45.318440962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:45.318622 kubelet[2887]: E0124 00:38:45.318593 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:38:45.318664 kubelet[2887]: E0124 00:38:45.318633 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:38:45.318764 kubelet[2887]: E0124 00:38:45.318734 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x8bpc_calico-system(fe469055-bc9a-468d-9724-6bf26a67fb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:45.320344 kubelet[2887]: E0124 00:38:45.320315 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:38:47.657783 containerd[1677]: time="2026-01-24T00:38:47.657725691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:38:48.004254 containerd[1677]: time="2026-01-24T00:38:48.004048025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:48.006403 containerd[1677]: time="2026-01-24T00:38:48.006300402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:38:48.006403 containerd[1677]: time="2026-01-24T00:38:48.006372769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:48.006648 kubelet[2887]: E0124 00:38:48.006617 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:38:48.007241 kubelet[2887]: E0124 00:38:48.006947 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:38:48.007241 kubelet[2887]: E0124 00:38:48.007063 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5f190ef84d634309a1c72b69d1983ada,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:48.009474 containerd[1677]: time="2026-01-24T00:38:48.009330457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:38:48.324381 containerd[1677]: time="2026-01-24T00:38:48.324171484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:48.325791 containerd[1677]: time="2026-01-24T00:38:48.325647542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:38:48.325791 containerd[1677]: time="2026-01-24T00:38:48.325708642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:48.326202 kubelet[2887]: E0124 00:38:48.326147 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:38:48.326202 kubelet[2887]: E0124 00:38:48.326193 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:38:48.326488 kubelet[2887]: E0124 00:38:48.326312 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4mwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c97449468-s944b_calico-system(3de22d99-6b03-4053-8689-b6779dcd23d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:48.327674 kubelet[2887]: E0124 00:38:48.327620 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:38:49.658909 containerd[1677]: time="2026-01-24T00:38:49.658852407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:38:50.002222 containerd[1677]: time="2026-01-24T00:38:50.002155717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:50.003504 containerd[1677]: time="2026-01-24T00:38:50.003401581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:38:50.003585 containerd[1677]: time="2026-01-24T00:38:50.003565531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:50.003871 kubelet[2887]: E0124 00:38:50.003778 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:38:50.003871 kubelet[2887]: E0124 00:38:50.003851 2887 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:38:50.004508 kubelet[2887]: E0124 00:38:50.004439 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dj67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cbbc8dcd-w5nn4_calico-system(f695da60-5d07-4b6c-8f24-e49612d3b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:50.005663 kubelet[2887]: E0124 00:38:50.005618 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:38:50.658455 kubelet[2887]: E0124 00:38:50.658384 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:38:50.659768 containerd[1677]: time="2026-01-24T00:38:50.659440896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:38:50.996294 containerd[1677]: time="2026-01-24T00:38:50.996115177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:38:50.997546 containerd[1677]: time="2026-01-24T00:38:50.997450378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:38:50.997819 containerd[1677]: time="2026-01-24T00:38:50.997689394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:38:50.998156 kubelet[2887]: E0124 00:38:50.998091 2887 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:38:50.998373 kubelet[2887]: E0124 00:38:50.998181 2887 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:38:50.998721 kubelet[2887]: E0124 00:38:50.998609 2887 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghhgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dmlzg_calico-system(cc6833e1-bfb0-4eb5-9ff2-60bda2e93290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:38:50.999963 kubelet[2887]: E0124 00:38:50.999893 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:38:56.220319 systemd[1]: cri-containerd-221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701.scope: Deactivated successfully. Jan 24 00:38:56.221730 systemd[1]: cri-containerd-221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701.scope: Consumed 25.032s CPU time, 115.3M memory peak. Jan 24 00:38:56.222906 containerd[1677]: time="2026-01-24T00:38:56.222866665Z" level=info msg="received container exit event container_id:\"221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701\" id:\"221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701\" pid:3217 exit_status:1 exited_at:{seconds:1769215136 nanos:221699342}" Jan 24 00:38:56.225541 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:38:56.225615 kernel: audit: type=1334 audit(1769215136.222:879): prog-id=144 op=UNLOAD Jan 24 00:38:56.222000 audit: BPF prog-id=144 op=UNLOAD Jan 24 00:38:56.222000 audit: BPF prog-id=148 op=UNLOAD Jan 24 00:38:56.231242 kernel: audit: type=1334 audit(1769215136.222:880): prog-id=148 op=UNLOAD Jan 24 00:38:56.259357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701-rootfs.mount: Deactivated successfully. Jan 24 00:38:56.656833 kubelet[2887]: E0124 00:38:56.656731 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-9wsxh" podUID="bd7b47a3-e9a9-4695-b299-4e30c1f99caf" Jan 24 00:38:56.688520 kubelet[2887]: E0124 00:38:56.688466 2887 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.1.115:47154->10.0.1.41:2379: read: connection timed out" Jan 24 00:38:57.276091 kubelet[2887]: I0124 00:38:57.275967 2887 scope.go:117] "RemoveContainer" containerID="221f5a43d4531824be4ef985b705c1da5feff95ae1a902a13dae1199e100e701" Jan 24 00:38:57.278585 containerd[1677]: time="2026-01-24T00:38:57.278497010Z" level=info msg="CreateContainer within sandbox \"f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 24 00:38:57.298251 containerd[1677]: time="2026-01-24T00:38:57.296785118Z" level=info msg="Container ca3b8cce23932bffee5faa334911ed60102a1181d2dc7b7246b727767eecaf1b: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:38:57.312968 containerd[1677]: time="2026-01-24T00:38:57.312915779Z" level=info msg="CreateContainer within sandbox \"f1a29b6028b971af70e75ce7fb9b76d552050963087f1b0cd716e711e63775b8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ca3b8cce23932bffee5faa334911ed60102a1181d2dc7b7246b727767eecaf1b\"" Jan 24 00:38:57.313850 containerd[1677]: time="2026-01-24T00:38:57.313775193Z" level=info msg="StartContainer for \"ca3b8cce23932bffee5faa334911ed60102a1181d2dc7b7246b727767eecaf1b\"" Jan 24 00:38:57.315546 
containerd[1677]: time="2026-01-24T00:38:57.315503034Z" level=info msg="connecting to shim ca3b8cce23932bffee5faa334911ed60102a1181d2dc7b7246b727767eecaf1b" address="unix:///run/containerd/s/47d84dd0a6390badaa3a9edfb56876fa8b14b74c62046063d10dbce964cca601" protocol=ttrpc version=3 Jan 24 00:38:57.343439 systemd[1]: Started cri-containerd-ca3b8cce23932bffee5faa334911ed60102a1181d2dc7b7246b727767eecaf1b.scope - libcontainer container ca3b8cce23932bffee5faa334911ed60102a1181d2dc7b7246b727767eecaf1b. Jan 24 00:38:57.359000 audit: BPF prog-id=254 op=LOAD Jan 24 00:38:57.363236 kernel: audit: type=1334 audit(1769215137.359:881): prog-id=254 op=LOAD Jan 24 00:38:57.359000 audit: BPF prog-id=255 op=LOAD Jan 24 00:38:57.359000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.369747 kernel: audit: type=1334 audit(1769215137.359:882): prog-id=255 op=LOAD Jan 24 00:38:57.369804 kernel: audit: type=1300 audit(1769215137.359:882): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.359000 audit: BPF prog-id=255 op=UNLOAD Jan 24 00:38:57.382407 kernel: audit: type=1327 audit(1769215137.359:882): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.382477 kernel: audit: type=1334 audit(1769215137.359:883): prog-id=255 op=UNLOAD Jan 24 00:38:57.359000 audit[5392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.392670 kernel: audit: type=1300 audit(1769215137.359:883): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.392734 kernel: audit: type=1327 audit(1769215137.359:883): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.359000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.359000 audit: BPF prog-id=256 op=LOAD Jan 24 00:38:57.397580 kernel: audit: type=1334 audit(1769215137.359:884): prog-id=256 op=LOAD Jan 24 00:38:57.359000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.359000 audit: BPF prog-id=257 op=LOAD Jan 24 00:38:57.359000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.359000 audit: BPF prog-id=257 op=UNLOAD Jan 24 00:38:57.359000 audit[5392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.359000 audit: BPF prog-id=256 op=UNLOAD Jan 24 00:38:57.359000 audit[5392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.359000 audit: BPF prog-id=258 op=LOAD Jan 24 00:38:57.359000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=3070 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:57.359000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361336238636365323339333262666665653566616133333439313165 Jan 24 00:38:57.405148 containerd[1677]: time="2026-01-24T00:38:57.404986978Z" level=info msg="StartContainer for \"ca3b8cce23932bffee5faa334911ed60102a1181d2dc7b7246b727767eecaf1b\" returns successfully" Jan 24 00:38:57.803480 systemd[1]: cri-containerd-e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e.scope: Deactivated successfully. Jan 24 00:38:57.804236 systemd[1]: cri-containerd-e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e.scope: Consumed 3.214s CPU time, 57.2M memory peak, 192K read from disk. Jan 24 00:38:57.804000 audit: BPF prog-id=259 op=LOAD Jan 24 00:38:57.804000 audit: BPF prog-id=81 op=UNLOAD Jan 24 00:38:57.805000 audit: BPF prog-id=101 op=UNLOAD Jan 24 00:38:57.805000 audit: BPF prog-id=105 op=UNLOAD Jan 24 00:38:57.806610 containerd[1677]: time="2026-01-24T00:38:57.806569697Z" level=info msg="received container exit event container_id:\"e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e\" id:\"e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e\" pid:2737 exit_status:1 exited_at:{seconds:1769215137 nanos:805826015}" Jan 24 00:38:57.829420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e-rootfs.mount: Deactivated successfully. Jan 24 00:38:58.279784 kubelet[2887]: I0124 00:38:58.279755 2887 scope.go:117] "RemoveContainer" containerID="e3d9879fc76117b19febefa0e2b5d9941cc7d797afddb0a51bbed757ff229b1e" Jan 24 00:38:58.282408 containerd[1677]: time="2026-01-24T00:38:58.282374401Z" level=info msg="CreateContainer within sandbox \"55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 24 00:38:58.296193 containerd[1677]: time="2026-01-24T00:38:58.296154106Z" level=info msg="Container 3cc04fbb25b9dfda5379ed7fe21ad3a77b07015af39b2c2bffab6d2286bf416e: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:38:58.305737 containerd[1677]: time="2026-01-24T00:38:58.305655550Z" level=info msg="CreateContainer within sandbox \"55ad96b9c730f5fc1e29d58926b5f138f9f0d5343bda5b6d5d3e3002f923936f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3cc04fbb25b9dfda5379ed7fe21ad3a77b07015af39b2c2bffab6d2286bf416e\"" Jan 24 00:38:58.306324 containerd[1677]: time="2026-01-24T00:38:58.306286622Z" level=info msg="StartContainer for \"3cc04fbb25b9dfda5379ed7fe21ad3a77b07015af39b2c2bffab6d2286bf416e\"" Jan 24 00:38:58.308075 containerd[1677]: time="2026-01-24T00:38:58.308040871Z" level=info msg="connecting to shim 3cc04fbb25b9dfda5379ed7fe21ad3a77b07015af39b2c2bffab6d2286bf416e" address="unix:///run/containerd/s/0cba3ffebf644aa293862e21665a5523aaf422949f33d1c307e5dc482da6a391" protocol=ttrpc version=3 Jan 24 00:38:58.336475 systemd[1]: Started cri-containerd-3cc04fbb25b9dfda5379ed7fe21ad3a77b07015af39b2c2bffab6d2286bf416e.scope - libcontainer container 3cc04fbb25b9dfda5379ed7fe21ad3a77b07015af39b2c2bffab6d2286bf416e. 
Jan 24 00:38:58.351000 audit: BPF prog-id=260 op=LOAD Jan 24 00:38:58.351000 audit: BPF prog-id=261 op=LOAD Jan 24 00:38:58.351000 audit[5436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2570 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:58.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633034666262323562396466646135333739656437666532316164 Jan 24 00:38:58.351000 audit: BPF prog-id=261 op=UNLOAD Jan 24 00:38:58.351000 audit[5436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:58.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633034666262323562396466646135333739656437666532316164 Jan 24 00:38:58.351000 audit: BPF prog-id=262 op=LOAD Jan 24 00:38:58.351000 audit[5436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2570 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:58.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633034666262323562396466646135333739656437666532316164 Jan 24 00:38:58.351000 audit: BPF prog-id=263 op=LOAD Jan 24 00:38:58.351000 audit[5436]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2570 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:58.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633034666262323562396466646135333739656437666532316164 Jan 24 00:38:58.351000 audit: BPF prog-id=263 op=UNLOAD Jan 24 00:38:58.351000 audit[5436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:58.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633034666262323562396466646135333739656437666532316164 Jan 24 00:38:58.351000 audit: BPF prog-id=262 op=UNLOAD Jan 24 00:38:58.351000 audit[5436]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2570 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:58.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633034666262323562396466646135333739656437666532316164 Jan 24 00:38:58.352000 audit: BPF prog-id=264 op=LOAD Jan 24 00:38:58.352000 audit[5436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2570 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:38:58.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363633034666262323562396466646135333739656437666532316164 Jan 24 00:38:58.393086 containerd[1677]: time="2026-01-24T00:38:58.393038125Z" level=info msg="StartContainer for \"3cc04fbb25b9dfda5379ed7fe21ad3a77b07015af39b2c2bffab6d2286bf416e\" returns successfully" Jan 24 00:38:59.183791 kubelet[2887]: E0124 00:38:59.183658 2887 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.1.115:46952->10.0.1.41:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4593-0-0-7-bbab233dcd.188d83c92dc9e140 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4593-0-0-7-bbab233dcd,UID:2c7a4b6b05bddd2c840c0e29ffdd3885,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4593-0-0-7-bbab233dcd,},FirstTimestamp:2026-01-24 00:38:48.706343232 +0000 UTC m=+223.127274582,LastTimestamp:2026-01-24 00:38:48.706343232 +0000 UTC m=+223.127274582,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4593-0-0-7-bbab233dcd,}" Jan 24 00:38:59.657727 kubelet[2887]: E0124 00:38:59.657652 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x8bpc" podUID="fe469055-bc9a-468d-9724-6bf26a67fb3d" Jan 24 00:38:59.951623 kubelet[2887]: 
I0124 00:38:59.951509 2887 status_manager.go:890] "Failed to get status for pod" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.1.115:47056->10.0.1.41:2379: read: connection timed out" Jan 24 00:39:02.475128 systemd[1]: cri-containerd-d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1.scope: Deactivated successfully. Jan 24 00:39:02.475713 systemd[1]: cri-containerd-d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1.scope: Consumed 2.254s CPU time, 21.3M memory peak, 400K read from disk. Jan 24 00:39:02.481826 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 24 00:39:02.481937 kernel: audit: type=1334 audit(1769215142.476:901): prog-id=265 op=LOAD Jan 24 00:39:02.476000 audit: BPF prog-id=265 op=LOAD Jan 24 00:39:02.482034 containerd[1677]: time="2026-01-24T00:39:02.479662420Z" level=info msg="received container exit event container_id:\"d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1\" id:\"d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1\" pid:2745 exit_status:1 exited_at:{seconds:1769215142 nanos:479301186}" Jan 24 00:39:02.476000 audit: BPF prog-id=91 op=UNLOAD Jan 24 00:39:02.487407 kernel: audit: type=1334 audit(1769215142.476:902): prog-id=91 op=UNLOAD Jan 24 00:39:02.487482 kernel: audit: type=1334 audit(1769215142.477:903): prog-id=106 op=UNLOAD Jan 24 00:39:02.477000 audit: BPF prog-id=106 op=UNLOAD Jan 24 00:39:02.477000 audit: BPF prog-id=110 op=UNLOAD Jan 24 00:39:02.490313 kernel: audit: type=1334 audit(1769215142.477:904): prog-id=110 op=UNLOAD Jan 24 00:39:02.516419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1-rootfs.mount: Deactivated successfully. 
Jan 24 00:39:02.656592 kubelet[2887]: E0124 00:39:02.656544 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dmlzg" podUID="cc6833e1-bfb0-4eb5-9ff2-60bda2e93290" Jan 24 00:39:02.657184 kubelet[2887]: E0124 00:39:02.657143 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c97449468-s944b" podUID="3de22d99-6b03-4053-8689-b6779dcd23d2" Jan 24 00:39:03.299623 kubelet[2887]: I0124 00:39:03.299597 2887 scope.go:117] "RemoveContainer" containerID="d625589fe0ebc6edd221159dd15cd7b24f30cea9d485c5949c2143d4e315dcb1" Jan 24 00:39:03.301668 containerd[1677]: time="2026-01-24T00:39:03.301639419Z" level=info msg="CreateContainer within sandbox \"c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 24 00:39:03.311058 containerd[1677]: time="2026-01-24T00:39:03.311029452Z" level=info msg="Container adaac170a298f194e6baa0242bda89147ffb76cd653a85b61be659fb6a25dd68: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:39:03.319340 containerd[1677]: time="2026-01-24T00:39:03.319306830Z" level=info msg="CreateContainer within sandbox \"c4f23e1b1de3e348b17f9c2e0e8970509ede924f63ca49a08e2726556c1e2f8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"adaac170a298f194e6baa0242bda89147ffb76cd653a85b61be659fb6a25dd68\"" Jan 24 00:39:03.319973 containerd[1677]: time="2026-01-24T00:39:03.319885214Z" level=info msg="StartContainer for \"adaac170a298f194e6baa0242bda89147ffb76cd653a85b61be659fb6a25dd68\"" Jan 24 00:39:03.321195 containerd[1677]: time="2026-01-24T00:39:03.321128783Z" level=info msg="connecting to shim adaac170a298f194e6baa0242bda89147ffb76cd653a85b61be659fb6a25dd68" address="unix:///run/containerd/s/9dc864d4726b4bf9a50968401b7d6c54f58039cd7fb131bb7b7a3e6e4d1e91a1" protocol=ttrpc version=3 Jan 24 00:39:03.343397 systemd[1]: Started cri-containerd-adaac170a298f194e6baa0242bda89147ffb76cd653a85b61be659fb6a25dd68.scope - libcontainer container adaac170a298f194e6baa0242bda89147ffb76cd653a85b61be659fb6a25dd68. 
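[Editor's note, not part of the journal] Between 00:38:56 and 00:39:03 the entries above record three long-running containers (tigera-operator, kube-controller-manager, kube-scheduler) exiting with status 1 and being recreated by the kubelet as Attempt:1, while lease renewals and event writes toward etcd at 10.0.1.41:2379 time out. A minimal sketch of how the same restart pattern can be read back through the API server instead of the raw journal, assuming the official `kubernetes` Python client and a reachable kubeconfig (neither of which appears in the log):

```python
#!/usr/bin/env python3
"""Sketch: list restart counts, waiting reasons, and last termination details for the
pods the journal shows crashing or stuck in image-pull backoff. Assumes the official
`kubernetes` Python client is installed and a working kubeconfig is available."""
from kubernetes import client, config


def report(namespace: str) -> None:
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        for cs in pod.status.container_statuses or []:
            term = cs.last_state.terminated
            last = f"last exit {term.exit_code} ({term.reason})" if term else "no prior exit"
            waiting = cs.state.waiting.reason if cs.state.waiting else "running/terminated"
            print(f"{namespace}/{pod.metadata.name} {cs.name}: "
                  f"restarts={cs.restart_count}, state={waiting}, {last}")


if __name__ == "__main__":
    config.load_kube_config()  # or config.load_incluster_config() when run on the node
    for ns in ("kube-system", "calico-system", "calico-apiserver"):
        report(ns)
```

If the cluster state matches the journal, the calico workloads would report ImagePullBackOff/ErrImagePull waiting reasons and the recreated control-plane containers a restart count of at least 1.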
Jan 24 00:39:03.353000 audit: BPF prog-id=266 op=LOAD Jan 24 00:39:03.357112 kernel: audit: type=1334 audit(1769215143.353:905): prog-id=266 op=LOAD Jan 24 00:39:03.357183 kernel: audit: type=1334 audit(1769215143.353:906): prog-id=267 op=LOAD Jan 24 00:39:03.353000 audit: BPF prog-id=267 op=LOAD Jan 24 00:39:03.353000 audit[5477]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.361386 kernel: audit: type=1300 audit(1769215143.353:906): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.366628 kernel: audit: type=1327 audit(1769215143.353:906): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.366680 kernel: audit: type=1334 audit(1769215143.353:907): prog-id=267 op=UNLOAD Jan 24 00:39:03.353000 audit: BPF prog-id=267 op=UNLOAD Jan 24 00:39:03.370613 kernel: audit: type=1300 audit(1769215143.353:907): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit[5477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.353000 audit: BPF prog-id=268 op=LOAD Jan 24 00:39:03.353000 audit[5477]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.353000 audit: BPF prog-id=269 op=LOAD Jan 24 00:39:03.353000 audit[5477]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.353000 audit: BPF prog-id=269 op=UNLOAD Jan 24 00:39:03.353000 audit[5477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.353000 audit: BPF prog-id=268 op=UNLOAD Jan 24 00:39:03.353000 audit[5477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.353000 audit: BPF prog-id=270 op=LOAD Jan 24 00:39:03.353000 audit[5477]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2587 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:39:03.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164616163313730613239386631393465366261613032343262646138 Jan 24 00:39:03.389414 containerd[1677]: time="2026-01-24T00:39:03.389382314Z" level=info msg="StartContainer for \"adaac170a298f194e6baa0242bda89147ffb76cd653a85b61be659fb6a25dd68\" returns successfully" Jan 24 00:39:03.659922 kubelet[2887]: E0124 00:39:03.658376 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cbbc8dcd-w5nn4" podUID="f695da60-5d07-4b6c-8f24-e49612d3b40f" Jan 24 00:39:03.661037 kubelet[2887]: E0124 00:39:03.661002 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b7c7f79fb-x9lmq" podUID="c22b1229-18a4-4f62-9e22-d2ed4a3840d1" Jan 24 00:39:06.689720 kubelet[2887]: E0124 00:39:06.689393 2887 controller.go:195] "Failed to update lease" err="Put \"https://10.0.1.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593-0-0-7-bbab233dcd?timeout=10s\": context deadline exceeded"