Jan 27 05:38:10.024215 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 03:09:34 -00 2026 Jan 27 05:38:10.024248 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=94a0aed2c135ea3629cf7bc829842658bafc4ce682f9974c582239b9a4f2cb9e Jan 27 05:38:10.024258 kernel: BIOS-provided physical RAM map: Jan 27 05:38:10.024265 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 27 05:38:10.024271 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 27 05:38:10.024277 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 27 05:38:10.024286 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 27 05:38:10.024293 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 27 05:38:10.024299 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 27 05:38:10.024306 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 27 05:38:10.024312 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable Jan 27 05:38:10.024318 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 27 05:38:10.024324 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 27 05:38:10.024331 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 27 05:38:10.024340 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 27 05:38:10.024347 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 27 05:38:10.024354 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 27 05:38:10.024361 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 27 05:38:10.024367 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 27 05:38:10.024374 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 27 05:38:10.024382 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 27 05:38:10.024389 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 27 05:38:10.024395 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 27 05:38:10.024401 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 27 05:38:10.024408 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 27 05:38:10.024415 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 27 05:38:10.024421 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 27 05:38:10.024428 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 27 05:38:10.024434 kernel: NX (Execute Disable) protection: active Jan 27 05:38:10.024441 kernel: APIC: Static calls initialized Jan 27 05:38:10.024447 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable Jan 27 05:38:10.024456 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable Jan 27 05:38:10.024462 kernel: extended physical RAM map: Jan 27 05:38:10.024469 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 27 
05:38:10.024476 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 27 05:38:10.024482 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 27 05:38:10.024489 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 27 05:38:10.024496 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 27 05:38:10.024502 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 27 05:38:10.024509 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 27 05:38:10.024520 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable Jan 27 05:38:10.024527 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable Jan 27 05:38:10.024534 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable Jan 27 05:38:10.024541 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable Jan 27 05:38:10.024550 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable Jan 27 05:38:10.024557 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 27 05:38:10.024564 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 27 05:38:10.024571 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 27 05:38:10.024577 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 27 05:38:10.024584 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 27 05:38:10.024591 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 27 05:38:10.024598 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 27 05:38:10.024605 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 27 05:38:10.024612 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 27 05:38:10.024619 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 27 05:38:10.024628 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 27 05:38:10.024634 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 27 05:38:10.024641 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 27 05:38:10.024648 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 27 05:38:10.024655 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 27 05:38:10.024662 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 27 05:38:10.024669 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 27 05:38:10.024676 kernel: efi: EFI v2.7 by EDK II Jan 27 05:38:10.024683 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018 Jan 27 05:38:10.024690 kernel: random: crng init done Jan 27 05:38:10.024697 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 27 05:38:10.024706 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 27 05:38:10.024713 kernel: secureboot: Secure boot disabled Jan 27 05:38:10.024720 kernel: SMBIOS 2.8 present. 
Jan 27 05:38:10.024727 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 27 05:38:10.024734 kernel: DMI: Memory slots populated: 1/1 Jan 27 05:38:10.024741 kernel: Hypervisor detected: KVM Jan 27 05:38:10.024748 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 27 05:38:10.024754 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 27 05:38:10.024761 kernel: kvm-clock: using sched offset of 5619795434 cycles Jan 27 05:38:10.024769 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 27 05:38:10.024778 kernel: tsc: Detected 2294.608 MHz processor Jan 27 05:38:10.024786 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 27 05:38:10.024794 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 27 05:38:10.024801 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000 Jan 27 05:38:10.024808 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 27 05:38:10.024816 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 27 05:38:10.024824 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 27 05:38:10.024831 kernel: Using GB pages for direct mapping Jan 27 05:38:10.024840 kernel: ACPI: Early table checksum verification disabled Jan 27 05:38:10.024848 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 27 05:38:10.024856 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jan 27 05:38:10.024863 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 05:38:10.024870 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 05:38:10.024878 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 27 05:38:10.024885 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 05:38:10.024894 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 05:38:10.024902 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 05:38:10.024909 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 27 05:38:10.024917 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3] Jan 27 05:38:10.024924 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b] Jan 27 05:38:10.024932 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 27 05:38:10.024939 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f] Jan 27 05:38:10.024948 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b] Jan 27 05:38:10.024955 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027] Jan 27 05:38:10.024963 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037] Jan 27 05:38:10.024970 kernel: No NUMA configuration found Jan 27 05:38:10.024977 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 27 05:38:10.024985 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jan 27 05:38:10.024992 kernel: Zone ranges: Jan 27 05:38:10.025000 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 27 05:38:10.025008 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 27 05:38:10.025016 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 27 05:38:10.025023 kernel: Device empty Jan 27 05:38:10.025045 kernel: Movable zone start for each node Jan 
27 05:38:10.025053 kernel: Early memory node ranges Jan 27 05:38:10.025060 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 27 05:38:10.025070 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 27 05:38:10.025080 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 27 05:38:10.025087 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 27 05:38:10.025094 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff] Jan 27 05:38:10.025102 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff] Jan 27 05:38:10.025109 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff] Jan 27 05:38:10.025122 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff] Jan 27 05:38:10.025132 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff] Jan 27 05:38:10.025140 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 27 05:38:10.025147 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 27 05:38:10.025155 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 27 05:38:10.025165 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 27 05:38:10.025173 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 27 05:38:10.025181 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 27 05:38:10.025189 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 27 05:38:10.025199 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 27 05:38:10.025206 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges Jan 27 05:38:10.025214 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 27 05:38:10.025222 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 27 05:38:10.025231 kernel: On node 0, zone Normal: 276 pages in unavailable ranges Jan 27 05:38:10.025239 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 27 05:38:10.025247 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 27 05:38:10.025255 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 27 05:38:10.025265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 27 05:38:10.025273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 27 05:38:10.025281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 27 05:38:10.025289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 27 05:38:10.025297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 27 05:38:10.025305 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 27 05:38:10.025313 kernel: TSC deadline timer available Jan 27 05:38:10.025323 kernel: CPU topo: Max. logical packages: 2 Jan 27 05:38:10.025331 kernel: CPU topo: Max. logical dies: 2 Jan 27 05:38:10.025339 kernel: CPU topo: Max. dies per package: 1 Jan 27 05:38:10.025347 kernel: CPU topo: Max. threads per core: 1 Jan 27 05:38:10.025355 kernel: CPU topo: Num. cores per package: 1 Jan 27 05:38:10.025363 kernel: CPU topo: Num. 
threads per package: 1 Jan 27 05:38:10.025371 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 27 05:38:10.025379 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 27 05:38:10.025388 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 27 05:38:10.025396 kernel: kvm-guest: setup PV sched yield Jan 27 05:38:10.025405 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 27 05:38:10.025413 kernel: Booting paravirtualized kernel on KVM Jan 27 05:38:10.025421 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 27 05:38:10.025429 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 27 05:38:10.025437 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 27 05:38:10.025447 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 27 05:38:10.025455 kernel: pcpu-alloc: [0] 0 1 Jan 27 05:38:10.025463 kernel: kvm-guest: PV spinlocks enabled Jan 27 05:38:10.025471 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 27 05:38:10.025480 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=94a0aed2c135ea3629cf7bc829842658bafc4ce682f9974c582239b9a4f2cb9e Jan 27 05:38:10.025489 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 27 05:38:10.025499 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 27 05:38:10.025507 kernel: Fallback order for Node 0: 0 Jan 27 05:38:10.025514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694 Jan 27 05:38:10.025522 kernel: Policy zone: Normal Jan 27 05:38:10.025530 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 27 05:38:10.025538 kernel: software IO TLB: area num 2. Jan 27 05:38:10.025546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 27 05:38:10.025556 kernel: ftrace: allocating 40128 entries in 157 pages Jan 27 05:38:10.025564 kernel: ftrace: allocated 157 pages with 5 groups Jan 27 05:38:10.025572 kernel: Dynamic Preempt: voluntary Jan 27 05:38:10.025580 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 27 05:38:10.025593 kernel: rcu: RCU event tracing is enabled. Jan 27 05:38:10.025601 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 27 05:38:10.025609 kernel: Trampoline variant of Tasks RCU enabled. Jan 27 05:38:10.025617 kernel: Rude variant of Tasks RCU enabled. Jan 27 05:38:10.025627 kernel: Tracing variant of Tasks RCU enabled. Jan 27 05:38:10.025634 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 27 05:38:10.025643 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 27 05:38:10.025651 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 27 05:38:10.025660 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 27 05:38:10.025668 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 27 05:38:10.025676 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 27 05:38:10.025685 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 27 05:38:10.025693 kernel: Console: colour dummy device 80x25 Jan 27 05:38:10.025701 kernel: printk: legacy console [tty0] enabled Jan 27 05:38:10.025710 kernel: printk: legacy console [ttyS0] enabled Jan 27 05:38:10.025718 kernel: ACPI: Core revision 20240827 Jan 27 05:38:10.025726 kernel: APIC: Switch to symmetric I/O mode setup Jan 27 05:38:10.025734 kernel: x2apic enabled Jan 27 05:38:10.025742 kernel: APIC: Switched APIC routing to: physical x2apic Jan 27 05:38:10.025752 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 27 05:38:10.025760 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 27 05:38:10.025767 kernel: kvm-guest: setup PV IPIs Jan 27 05:38:10.025775 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 27 05:38:10.025784 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jan 27 05:38:10.025792 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 27 05:38:10.025801 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 27 05:38:10.025809 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 27 05:38:10.025817 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 27 05:38:10.025824 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 27 05:38:10.025832 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 27 05:38:10.025840 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 27 05:38:10.025847 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 27 05:38:10.025855 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 27 05:38:10.025862 kernel: TAA: Mitigation: Clear CPU buffers Jan 27 05:38:10.025870 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 27 05:38:10.025877 kernel: active return thunk: its_return_thunk Jan 27 05:38:10.025886 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 27 05:38:10.025894 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 27 05:38:10.025902 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 27 05:38:10.025909 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 27 05:38:10.025917 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 27 05:38:10.025924 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 27 05:38:10.025932 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 27 05:38:10.025939 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 27 05:38:10.025947 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 27 05:38:10.025956 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 27 05:38:10.025964 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 27 05:38:10.025971 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 27 05:38:10.025979 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 27 05:38:10.025986 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Jan 27 05:38:10.025994 kernel: Freeing SMP alternatives memory: 32K Jan 27 05:38:10.026001 kernel: pid_max: default: 32768 minimum: 301 Jan 27 05:38:10.026009 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 27 05:38:10.026016 kernel: landlock: Up and running. Jan 27 05:38:10.026024 kernel: SELinux: Initializing. Jan 27 05:38:10.028072 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 27 05:38:10.028092 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 27 05:38:10.028106 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6) Jan 27 05:38:10.028114 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver. Jan 27 05:38:10.028123 kernel: ... version: 2 Jan 27 05:38:10.028132 kernel: ... bit width: 48 Jan 27 05:38:10.028150 kernel: ... generic registers: 8 Jan 27 05:38:10.028159 kernel: ... value mask: 0000ffffffffffff Jan 27 05:38:10.028167 kernel: ... max period: 00007fffffffffff Jan 27 05:38:10.028177 kernel: ... fixed-purpose events: 3 Jan 27 05:38:10.028185 kernel: ... event mask: 00000007000000ff Jan 27 05:38:10.028194 kernel: signal: max sigframe size: 3632 Jan 27 05:38:10.028202 kernel: rcu: Hierarchical SRCU implementation. Jan 27 05:38:10.028211 kernel: rcu: Max phase no-delay instances is 400. Jan 27 05:38:10.028219 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 27 05:38:10.028228 kernel: smp: Bringing up secondary CPUs ... Jan 27 05:38:10.028236 kernel: smpboot: x86: Booting SMP configuration: Jan 27 05:38:10.028246 kernel: .... node #0, CPUs: #1 Jan 27 05:38:10.028254 kernel: smp: Brought up 1 node, 2 CPUs Jan 27 05:38:10.028263 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jan 27 05:38:10.028272 kernel: Memory: 3969764K/4186776K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 212136K reserved, 0K cma-reserved) Jan 27 05:38:10.028280 kernel: devtmpfs: initialized Jan 27 05:38:10.028288 kernel: x86/mm: Memory block size: 128MB Jan 27 05:38:10.028296 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 27 05:38:10.028307 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 27 05:38:10.028315 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 27 05:38:10.028323 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 27 05:38:10.028331 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes) Jan 27 05:38:10.028339 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes) Jan 27 05:38:10.028347 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 27 05:38:10.028357 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 27 05:38:10.028365 kernel: pinctrl core: initialized pinctrl subsystem Jan 27 05:38:10.028374 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 27 05:38:10.028382 kernel: audit: initializing netlink subsys (disabled) Jan 27 05:38:10.028390 kernel: audit: type=2000 audit(1769492286.120:1): state=initialized audit_enabled=0 res=1 Jan 27 05:38:10.028399 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 27 05:38:10.028407 kernel: thermal_sys: Registered thermal governor 'user_space' 
Jan 27 05:38:10.028415 kernel: cpuidle: using governor menu Jan 27 05:38:10.028425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 27 05:38:10.028433 kernel: dca service started, version 1.12.1 Jan 27 05:38:10.028441 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 27 05:38:10.028449 kernel: PCI: Using configuration type 1 for base access Jan 27 05:38:10.028458 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 27 05:38:10.028466 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 27 05:38:10.028474 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 27 05:38:10.028484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 27 05:38:10.028492 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 27 05:38:10.028500 kernel: ACPI: Added _OSI(Module Device) Jan 27 05:38:10.028508 kernel: ACPI: Added _OSI(Processor Device) Jan 27 05:38:10.028516 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 27 05:38:10.028525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 27 05:38:10.028533 kernel: ACPI: Interpreter enabled Jan 27 05:38:10.028542 kernel: ACPI: PM: (supports S0 S3 S5) Jan 27 05:38:10.028551 kernel: ACPI: Using IOAPIC for interrupt routing Jan 27 05:38:10.028559 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 27 05:38:10.028567 kernel: PCI: Using E820 reservations for host bridge windows Jan 27 05:38:10.028575 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 27 05:38:10.028583 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 27 05:38:10.028775 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 27 05:38:10.028883 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 27 05:38:10.028980 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 27 05:38:10.028991 kernel: PCI host bridge to bus 0000:00 Jan 27 05:38:10.029103 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 27 05:38:10.029192 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 27 05:38:10.029282 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 27 05:38:10.029368 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 27 05:38:10.029454 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 27 05:38:10.029539 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window] Jan 27 05:38:10.029625 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 27 05:38:10.029738 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 27 05:38:10.029847 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 27 05:38:10.029945 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref] Jan 27 05:38:10.032000 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref] Jan 27 05:38:10.032169 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff] Jan 27 05:38:10.032270 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 27 05:38:10.032372 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 27 05:38:10.032478 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 
0x060400 PCIe Root Port Jan 27 05:38:10.032576 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff] Jan 27 05:38:10.032672 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 05:38:10.032770 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 27 05:38:10.032867 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 27 05:38:10.032968 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:38:10.033148 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.033250 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff] Jan 27 05:38:10.033347 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 05:38:10.033441 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 27 05:38:10.033540 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 27 05:38:10.033642 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.033738 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff] Jan 27 05:38:10.033833 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 05:38:10.033927 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 27 05:38:10.034020 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 27 05:38:10.036295 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.036408 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff] Jan 27 05:38:10.036509 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 05:38:10.036610 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 27 05:38:10.036707 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 27 05:38:10.036809 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.036913 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff] Jan 27 05:38:10.037010 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 05:38:10.037116 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 27 05:38:10.037212 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 27 05:38:10.037314 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.037410 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff] Jan 27 05:38:10.037508 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 05:38:10.037603 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 27 05:38:10.037697 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 27 05:38:10.037799 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.037895 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff] Jan 27 05:38:10.037994 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 05:38:10.040158 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 27 05:38:10.040264 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 27 05:38:10.040368 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.040464 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff] Jan 27 05:38:10.040561 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 05:38:10.040674 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 27 
05:38:10.040771 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 27 05:38:10.040873 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.040970 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff] Jan 27 05:38:10.042413 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 27 05:38:10.042529 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 27 05:38:10.042632 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 27 05:38:10.042739 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.042850 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff] Jan 27 05:38:10.042963 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 27 05:38:10.045447 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 27 05:38:10.045557 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 27 05:38:10.045670 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.045767 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 27 05:38:10.045863 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 27 05:38:10.045957 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 27 05:38:10.046062 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 27 05:38:10.046165 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.046266 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 27 05:38:10.046364 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 27 05:38:10.046458 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 27 05:38:10.046556 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 27 05:38:10.046664 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.046763 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 27 05:38:10.046858 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 27 05:38:10.046953 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 27 05:38:10.053794 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 27 05:38:10.053935 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.054048 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 27 05:38:10.054148 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 27 05:38:10.054249 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 27 05:38:10.054344 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 27 05:38:10.054447 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.054545 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Jan 27 05:38:10.054639 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 27 05:38:10.054733 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 27 05:38:10.054828 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 27 05:38:10.054931 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.055027 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 27 05:38:10.055141 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 27 
05:38:10.055236 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 27 05:38:10.055329 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 27 05:38:10.055429 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.055524 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 27 05:38:10.055620 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 27 05:38:10.055716 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 27 05:38:10.055810 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 27 05:38:10.055910 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.056006 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 27 05:38:10.056117 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 27 05:38:10.056224 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 27 05:38:10.056322 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 27 05:38:10.056424 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.056520 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 27 05:38:10.056615 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 27 05:38:10.056710 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 27 05:38:10.056804 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 27 05:38:10.056911 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.057007 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 27 05:38:10.057607 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 27 05:38:10.057710 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 27 05:38:10.057807 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 27 05:38:10.059170 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.059286 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 27 05:38:10.059385 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 27 05:38:10.059481 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 27 05:38:10.059577 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 27 05:38:10.059680 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.059776 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 27 05:38:10.059875 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 27 05:38:10.059970 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 27 05:38:10.060077 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 27 05:38:10.060201 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.060302 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 27 05:38:10.060397 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 27 05:38:10.060494 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 27 05:38:10.060589 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 27 05:38:10.060695 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.060795 kernel: pci 0000:00:04.7: BAR 0 [mem 
0x84386000-0x84386fff] Jan 27 05:38:10.060890 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 27 05:38:10.060986 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 27 05:38:10.062182 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 27 05:38:10.062302 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.062401 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 27 05:38:10.062503 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 27 05:38:10.062598 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 27 05:38:10.062694 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 27 05:38:10.062794 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.062890 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 27 05:38:10.062985 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 27 05:38:10.063104 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 27 05:38:10.063199 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 27 05:38:10.063300 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.063395 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 27 05:38:10.063490 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 27 05:38:10.063584 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 27 05:38:10.063681 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 27 05:38:10.063787 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.063883 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 27 05:38:10.063976 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 27 05:38:10.065409 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 27 05:38:10.065519 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 27 05:38:10.065636 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:38:10.065733 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 27 05:38:10.065827 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 27 05:38:10.065923 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 27 05:38:10.066017 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 27 05:38:10.067162 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 27 05:38:10.067270 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 27 05:38:10.067402 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 27 05:38:10.067499 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 27 05:38:10.067594 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 27 05:38:10.067697 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 27 05:38:10.067795 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 27 05:38:10.067899 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 27 05:38:10.067996 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 27 05:38:10.068101 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 05:38:10.068208 kernel: pci 
0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 27 05:38:10.068305 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 27 05:38:10.068404 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:38:10.068503 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 05:38:10.068607 kernel: pci_bus 0000:02: extended config space not accessible Jan 27 05:38:10.068620 kernel: acpiphp: Slot [1] registered Jan 27 05:38:10.068629 kernel: acpiphp: Slot [0] registered Jan 27 05:38:10.068638 kernel: acpiphp: Slot [2] registered Jan 27 05:38:10.068650 kernel: acpiphp: Slot [3] registered Jan 27 05:38:10.068658 kernel: acpiphp: Slot [4] registered Jan 27 05:38:10.068666 kernel: acpiphp: Slot [5] registered Jan 27 05:38:10.068675 kernel: acpiphp: Slot [6] registered Jan 27 05:38:10.068683 kernel: acpiphp: Slot [7] registered Jan 27 05:38:10.068691 kernel: acpiphp: Slot [8] registered Jan 27 05:38:10.068700 kernel: acpiphp: Slot [9] registered Jan 27 05:38:10.068710 kernel: acpiphp: Slot [10] registered Jan 27 05:38:10.068719 kernel: acpiphp: Slot [11] registered Jan 27 05:38:10.068728 kernel: acpiphp: Slot [12] registered Jan 27 05:38:10.068736 kernel: acpiphp: Slot [13] registered Jan 27 05:38:10.068744 kernel: acpiphp: Slot [14] registered Jan 27 05:38:10.068753 kernel: acpiphp: Slot [15] registered Jan 27 05:38:10.068761 kernel: acpiphp: Slot [16] registered Jan 27 05:38:10.068770 kernel: acpiphp: Slot [17] registered Jan 27 05:38:10.068780 kernel: acpiphp: Slot [18] registered Jan 27 05:38:10.068788 kernel: acpiphp: Slot [19] registered Jan 27 05:38:10.068797 kernel: acpiphp: Slot [20] registered Jan 27 05:38:10.068805 kernel: acpiphp: Slot [21] registered Jan 27 05:38:10.068813 kernel: acpiphp: Slot [22] registered Jan 27 05:38:10.068822 kernel: acpiphp: Slot [23] registered Jan 27 05:38:10.068830 kernel: acpiphp: Slot [24] registered Jan 27 05:38:10.068841 kernel: acpiphp: Slot [25] registered Jan 27 05:38:10.068849 kernel: acpiphp: Slot [26] registered Jan 27 05:38:10.068858 kernel: acpiphp: Slot [27] registered Jan 27 05:38:10.068866 kernel: acpiphp: Slot [28] registered Jan 27 05:38:10.068874 kernel: acpiphp: Slot [29] registered Jan 27 05:38:10.068883 kernel: acpiphp: Slot [30] registered Jan 27 05:38:10.068891 kernel: acpiphp: Slot [31] registered Jan 27 05:38:10.069515 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 27 05:38:10.069631 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 27 05:38:10.069735 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 05:38:10.069746 kernel: acpiphp: Slot [0-2] registered Jan 27 05:38:10.072115 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 27 05:38:10.072255 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 27 05:38:10.072361 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 27 05:38:10.072466 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 27 05:38:10.072566 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 05:38:10.072578 kernel: acpiphp: Slot [0-3] registered Jan 27 05:38:10.072686 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 27 05:38:10.072786 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 27 05:38:10.072885 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 27 05:38:10.072987 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 05:38:10.072998 
kernel: acpiphp: Slot [0-4] registered Jan 27 05:38:10.073110 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 27 05:38:10.073210 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 27 05:38:10.073307 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 05:38:10.073320 kernel: acpiphp: Slot [0-5] registered Jan 27 05:38:10.073426 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 27 05:38:10.073525 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 27 05:38:10.073623 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 27 05:38:10.073718 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 05:38:10.073729 kernel: acpiphp: Slot [0-6] registered Jan 27 05:38:10.073828 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 05:38:10.073839 kernel: acpiphp: Slot [0-7] registered Jan 27 05:38:10.073934 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 05:38:10.073946 kernel: acpiphp: Slot [0-8] registered Jan 27 05:38:10.074053 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 05:38:10.074065 kernel: acpiphp: Slot [0-9] registered Jan 27 05:38:10.074159 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 27 05:38:10.074173 kernel: acpiphp: Slot [0-10] registered Jan 27 05:38:10.074268 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 27 05:38:10.074279 kernel: acpiphp: Slot [0-11] registered Jan 27 05:38:10.074375 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 27 05:38:10.074386 kernel: acpiphp: Slot [0-12] registered Jan 27 05:38:10.074486 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 27 05:38:10.074500 kernel: acpiphp: Slot [0-13] registered Jan 27 05:38:10.074595 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 27 05:38:10.074607 kernel: acpiphp: Slot [0-14] registered Jan 27 05:38:10.074702 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 27 05:38:10.074714 kernel: acpiphp: Slot [0-15] registered Jan 27 05:38:10.074809 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 27 05:38:10.074822 kernel: acpiphp: Slot [0-16] registered Jan 27 05:38:10.074919 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 27 05:38:10.074930 kernel: acpiphp: Slot [0-17] registered Jan 27 05:38:10.075026 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 27 05:38:10.075828 kernel: acpiphp: Slot [0-18] registered Jan 27 05:38:10.075961 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 27 05:38:10.075974 kernel: acpiphp: Slot [0-19] registered Jan 27 05:38:10.076093 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 27 05:38:10.076105 kernel: acpiphp: Slot [0-20] registered Jan 27 05:38:10.076217 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 27 05:38:10.076228 kernel: acpiphp: Slot [0-21] registered Jan 27 05:38:10.076327 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 27 05:38:10.076338 kernel: acpiphp: Slot [0-22] registered Jan 27 05:38:10.076438 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 27 05:38:10.076450 kernel: acpiphp: Slot [0-23] registered Jan 27 05:38:10.076544 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 27 05:38:10.076555 kernel: acpiphp: Slot [0-24] registered Jan 27 05:38:10.076652 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 27 05:38:10.076663 kernel: acpiphp: Slot [0-25] registered Jan 27 05:38:10.076759 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 27 05:38:10.076773 kernel: acpiphp: Slot [0-26] registered Jan 27 05:38:10.076869 kernel: pci 0000:00:05.1: PCI bridge to [bus 
1b] Jan 27 05:38:10.076880 kernel: acpiphp: Slot [0-27] registered Jan 27 05:38:10.076974 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 27 05:38:10.076985 kernel: acpiphp: Slot [0-28] registered Jan 27 05:38:10.077100 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 27 05:38:10.077114 kernel: acpiphp: Slot [0-29] registered Jan 27 05:38:10.077211 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 27 05:38:10.077223 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 27 05:38:10.077231 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 27 05:38:10.077240 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 27 05:38:10.077248 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 27 05:38:10.077257 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 27 05:38:10.077267 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 27 05:38:10.077276 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 27 05:38:10.077285 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 27 05:38:10.077294 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 27 05:38:10.077302 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 27 05:38:10.077311 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 27 05:38:10.077319 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 27 05:38:10.077329 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 27 05:38:10.077338 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 27 05:38:10.077346 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 27 05:38:10.077355 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 27 05:38:10.077363 kernel: iommu: Default domain type: Translated Jan 27 05:38:10.077372 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 27 05:38:10.077381 kernel: efivars: Registered efivars operations Jan 27 05:38:10.077391 kernel: PCI: Using ACPI for IRQ routing Jan 27 05:38:10.077400 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 27 05:38:10.077409 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 27 05:38:10.077417 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 27 05:38:10.077425 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 27 05:38:10.077434 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 27 05:38:10.077442 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 27 05:38:10.077452 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 27 05:38:10.077460 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 27 05:38:10.077469 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 27 05:38:10.077478 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 27 05:38:10.077575 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 27 05:38:10.077670 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 27 05:38:10.077764 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 27 05:38:10.077778 kernel: vgaarb: loaded Jan 27 05:38:10.077786 kernel: clocksource: Switched to clocksource kvm-clock Jan 27 05:38:10.077795 kernel: VFS: Disk quotas dquot_6.6.0 Jan 27 05:38:10.077803 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 27 05:38:10.077812 kernel: pnp: PnP ACPI init Jan 27 05:38:10.077924 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] 
has been reserved Jan 27 05:38:10.077939 kernel: pnp: PnP ACPI: found 5 devices Jan 27 05:38:10.077948 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 27 05:38:10.077957 kernel: NET: Registered PF_INET protocol family Jan 27 05:38:10.077966 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 27 05:38:10.077974 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 27 05:38:10.077983 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 27 05:38:10.077992 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 27 05:38:10.078002 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 27 05:38:10.078011 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 27 05:38:10.078019 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 27 05:38:10.078028 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 27 05:38:10.078069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 27 05:38:10.078078 kernel: NET: Registered PF_XDP protocol family Jan 27 05:38:10.078185 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 27 05:38:10.078287 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 27 05:38:10.078389 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 27 05:38:10.078487 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 27 05:38:10.078586 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 27 05:38:10.078684 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 27 05:38:10.078786 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 27 05:38:10.078886 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 27 05:38:10.078987 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 27 05:38:10.079116 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 27 05:38:10.079216 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 27 05:38:10.079313 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 27 05:38:10.079410 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 27 05:38:10.079507 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 27 05:38:10.079611 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 27 05:38:10.079710 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 27 05:38:10.079808 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 27 05:38:10.079905 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 27 05:38:10.080002 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 27 05:38:10.080113 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 27 05:38:10.080222 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 27 05:38:10.080321 kernel: pci 
0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 27 05:38:10.080418 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 27 05:38:10.080516 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 27 05:38:10.080614 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 27 05:38:10.080712 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 27 05:38:10.080812 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 27 05:38:10.080914 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 27 05:38:10.081012 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 27 05:38:10.081119 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 27 05:38:10.081218 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 27 05:38:10.081314 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 27 05:38:10.081409 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 27 05:38:10.081509 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 27 05:38:10.081605 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 27 05:38:10.081701 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 27 05:38:10.081797 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 27 05:38:10.081893 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 27 05:38:10.081989 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 27 05:38:10.082102 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 27 05:38:10.082199 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 27 05:38:10.082293 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 27 05:38:10.082389 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.082484 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.082579 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.082674 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.082773 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.082867 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.082983 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.083088 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.083182 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.083277 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.083375 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.083470 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.083566 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.083659 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.083756 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 27 
05:38:10.083852 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.083947 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.084050 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.084152 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.084248 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.084343 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.084437 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.084531 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.084627 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.084720 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.084814 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.084916 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.085011 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.085120 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.085217 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.085312 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 27 05:38:10.085406 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 27 05:38:10.085500 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 27 05:38:10.085594 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 27 05:38:10.085688 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 27 05:38:10.085783 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 27 05:38:10.085880 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 27 05:38:10.085974 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 27 05:38:10.086082 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 27 05:38:10.086178 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 27 05:38:10.086271 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 27 05:38:10.086365 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 27 05:38:10.086462 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 27 05:38:10.086557 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.086651 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.086745 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.086839 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.086932 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.087027 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.087136 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.087229 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.087324 kernel: pci 
0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.087418 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.087512 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.087606 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.087699 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.087799 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.087893 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.087987 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.088089 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.088193 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.088287 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.088384 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.088477 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.088571 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.088665 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.088759 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.088853 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.088950 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.089052 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.089147 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.089240 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:38:10.089334 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:38:10.089434 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 05:38:10.089531 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 27 05:38:10.089630 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 27 05:38:10.089726 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:38:10.089822 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 05:38:10.089917 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 27 05:38:10.090010 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 27 05:38:10.090110 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:38:10.090209 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Jan 27 05:38:10.090306 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 05:38:10.090400 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 27 05:38:10.090494 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 27 05:38:10.090587 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 05:38:10.090680 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 27 05:38:10.090774 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 27 
05:38:10.090869 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 05:38:10.090962 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 27 05:38:10.091071 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 27 05:38:10.091168 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 05:38:10.091262 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 27 05:38:10.091356 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 27 05:38:10.091450 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 05:38:10.091543 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 27 05:38:10.091637 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 27 05:38:10.091736 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 05:38:10.091831 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 27 05:38:10.091925 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 27 05:38:10.092018 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 05:38:10.092120 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 27 05:38:10.092225 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 27 05:38:10.092319 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 27 05:38:10.092413 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 27 05:38:10.092507 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 27 05:38:10.092601 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 27 05:38:10.092696 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 27 05:38:10.092790 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 27 05:38:10.092883 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 27 05:38:10.092980 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 27 05:38:10.093081 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 27 05:38:10.093176 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 27 05:38:10.093270 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 27 05:38:10.093364 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 27 05:38:10.093458 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 27 05:38:10.093551 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 27 05:38:10.093645 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 27 05:38:10.093742 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 27 05:38:10.093837 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 27 05:38:10.093930 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 27 05:38:10.094025 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 27 05:38:10.094125 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 27 05:38:10.094219 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 27 05:38:10.094316 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 27 05:38:10.094409 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 27 05:38:10.094502 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 27 
05:38:10.094596 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 27 05:38:10.094690 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 27 05:38:10.094784 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 27 05:38:10.094877 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 27 05:38:10.094974 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 27 05:38:10.095081 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 27 05:38:10.095175 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 27 05:38:10.095268 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 27 05:38:10.095363 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 27 05:38:10.095458 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 27 05:38:10.095554 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 27 05:38:10.095647 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 27 05:38:10.095741 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 27 05:38:10.095835 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 27 05:38:10.095929 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 27 05:38:10.096023 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 27 05:38:10.096127 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 27 05:38:10.096230 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 27 05:38:10.096324 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 27 05:38:10.096418 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 27 05:38:10.096513 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 27 05:38:10.096607 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 27 05:38:10.096706 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 27 05:38:10.096801 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 27 05:38:10.096898 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 27 05:38:10.096994 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 27 05:38:10.097099 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 27 05:38:10.097196 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 27 05:38:10.097296 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 27 05:38:10.097391 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 27 05:38:10.097486 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 27 05:38:10.097580 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 27 05:38:10.097675 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 27 05:38:10.097770 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 27 05:38:10.097868 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 27 05:38:10.097962 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 27 05:38:10.098064 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 27 05:38:10.098158 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 27 05:38:10.098255 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 27 05:38:10.098351 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 27 
05:38:10.098448 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 27 05:38:10.098542 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 27 05:38:10.098637 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 27 05:38:10.098731 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 27 05:38:10.098828 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 27 05:38:10.098921 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 27 05:38:10.099019 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 27 05:38:10.099119 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 27 05:38:10.099216 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 27 05:38:10.099311 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 27 05:38:10.099405 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 27 05:38:10.099499 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 27 05:38:10.099597 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 27 05:38:10.099685 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 27 05:38:10.099774 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 27 05:38:10.099862 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 27 05:38:10.099948 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 27 05:38:10.100049 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 27 05:38:10.100157 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 27 05:38:10.100252 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 27 05:38:10.100340 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:38:10.100436 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 27 05:38:10.100528 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 27 05:38:10.100619 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:38:10.100717 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 27 05:38:10.100805 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 27 05:38:10.100899 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 27 05:38:10.100987 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 27 05:38:10.101098 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 27 05:38:10.101211 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 27 05:38:10.101305 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 27 05:38:10.101394 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 27 05:38:10.101489 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 27 05:38:10.101578 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 27 05:38:10.101674 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 27 05:38:10.101766 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 27 05:38:10.101862 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 27 05:38:10.101952 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 27 05:38:10.102052 kernel: pci_bus 0000:0a: 
resource 1 [mem 0x83000000-0x831fffff] Jan 27 05:38:10.102143 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 27 05:38:10.102242 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 27 05:38:10.102332 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 27 05:38:10.102428 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 27 05:38:10.102518 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 27 05:38:10.102616 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 27 05:38:10.102716 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 27 05:38:10.102811 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 27 05:38:10.102901 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 27 05:38:10.102996 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 27 05:38:10.103110 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 27 05:38:10.103205 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 27 05:38:10.103295 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 27 05:38:10.103388 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 27 05:38:10.103478 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 27 05:38:10.103573 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 27 05:38:10.103665 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 27 05:38:10.103753 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 27 05:38:10.103845 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 27 05:38:10.103934 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 27 05:38:10.104023 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 27 05:38:10.104146 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 27 05:38:10.104237 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 27 05:38:10.104325 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 27 05:38:10.106070 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 27 05:38:10.106199 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 27 05:38:10.106291 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 27 05:38:10.106396 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 27 05:38:10.106486 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Jan 27 05:38:10.106574 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 27 05:38:10.106668 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 27 05:38:10.106757 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 27 05:38:10.106845 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 27 05:38:10.106941 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 27 05:38:10.107029 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 27 05:38:10.107126 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 27 05:38:10.107220 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 27 05:38:10.107308 kernel: pci_bus 0000:19: resource 1 [mem 
0x81200000-0x813fffff] Jan 27 05:38:10.107399 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 27 05:38:10.107492 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 27 05:38:10.107581 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 27 05:38:10.107670 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 27 05:38:10.107765 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 27 05:38:10.107855 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 27 05:38:10.107946 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 27 05:38:10.108046 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 27 05:38:10.108147 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 27 05:38:10.108248 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 27 05:38:10.108344 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 27 05:38:10.108436 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 27 05:38:10.108524 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 27 05:38:10.108617 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 27 05:38:10.108706 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 27 05:38:10.108793 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 27 05:38:10.108805 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 27 05:38:10.108816 kernel: PCI: CLS 0 bytes, default 64 Jan 27 05:38:10.108825 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 27 05:38:10.108833 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 27 05:38:10.108842 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 27 05:38:10.108851 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 27 05:38:10.108859 kernel: Initialise system trusted keyrings Jan 27 05:38:10.108868 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 27 05:38:10.108878 kernel: Key type asymmetric registered Jan 27 05:38:10.108887 kernel: Asymmetric key parser 'x509' registered Jan 27 05:38:10.108895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 27 05:38:10.108903 kernel: io scheduler mq-deadline registered Jan 27 05:38:10.108912 kernel: io scheduler kyber registered Jan 27 05:38:10.108921 kernel: io scheduler bfq registered Jan 27 05:38:10.109023 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 27 05:38:10.110100 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 27 05:38:10.110233 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 27 05:38:10.110335 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 27 05:38:10.110433 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 27 05:38:10.110530 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 27 05:38:10.110632 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 27 05:38:10.110727 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 27 05:38:10.110823 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 27 05:38:10.110918 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 27 05:38:10.111013 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 27 05:38:10.112173 kernel: pcieport 
0000:00:02.5: AER: enabled with IRQ 29 Jan 27 05:38:10.112281 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 27 05:38:10.112378 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 27 05:38:10.112476 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 27 05:38:10.112574 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 27 05:38:10.112589 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 27 05:38:10.112686 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 27 05:38:10.112782 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 27 05:38:10.112879 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 27 05:38:10.112975 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 27 05:38:10.113084 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 27 05:38:10.113181 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 27 05:38:10.113278 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 27 05:38:10.113372 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 27 05:38:10.113468 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 27 05:38:10.113566 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 27 05:38:10.113661 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 27 05:38:10.113756 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 27 05:38:10.113853 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 27 05:38:10.113947 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 27 05:38:10.115090 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 27 05:38:10.115211 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 27 05:38:10.115223 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 27 05:38:10.115321 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 27 05:38:10.115417 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 27 05:38:10.115515 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 27 05:38:10.115610 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 27 05:38:10.115711 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 27 05:38:10.115806 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 27 05:38:10.115902 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 27 05:38:10.115997 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 27 05:38:10.116103 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 27 05:38:10.116210 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 27 05:38:10.116307 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 27 05:38:10.116405 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 27 05:38:10.116501 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 27 05:38:10.116602 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 27 05:38:10.116699 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 27 05:38:10.116794 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 27 05:38:10.116805 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 27 05:38:10.116903 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 27 05:38:10.116997 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 27 05:38:10.118927 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 27 05:38:10.119393 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 27 05:38:10.119494 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 27 05:38:10.119591 kernel: 
pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 27 05:38:10.119688 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Jan 27 05:38:10.119788 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 27 05:38:10.119885 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 27 05:38:10.119980 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 27 05:38:10.119991 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 27 05:38:10.120000 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 27 05:38:10.120009 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 27 05:38:10.120018 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 27 05:38:10.120029 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 27 05:38:10.120050 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 27 05:38:10.120059 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 27 05:38:10.120178 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 27 05:38:10.120272 kernel: rtc_cmos 00:03: registered as rtc0 Jan 27 05:38:10.120365 kernel: rtc_cmos 00:03: setting system clock to 2026-01-27T05:38:08 UTC (1769492288) Jan 27 05:38:10.120459 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 27 05:38:10.120470 kernel: intel_pstate: CPU model not supported Jan 27 05:38:10.120478 kernel: efifb: probing for efifb Jan 27 05:38:10.120487 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 27 05:38:10.120496 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 27 05:38:10.120505 kernel: efifb: scrolling: redraw Jan 27 05:38:10.120514 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 27 05:38:10.120525 kernel: Console: switching to colour frame buffer device 160x50 Jan 27 05:38:10.120533 kernel: fb0: EFI VGA frame buffer device Jan 27 05:38:10.120542 kernel: pstore: Using crash dump compression: deflate Jan 27 05:38:10.120550 kernel: pstore: Registered efi_pstore as persistent store backend Jan 27 05:38:10.120559 kernel: NET: Registered PF_INET6 protocol family Jan 27 05:38:10.120567 kernel: Segment Routing with IPv6 Jan 27 05:38:10.120576 kernel: In-situ OAM (IOAM) with IPv6 Jan 27 05:38:10.120586 kernel: NET: Registered PF_PACKET protocol family Jan 27 05:38:10.120595 kernel: Key type dns_resolver registered Jan 27 05:38:10.120604 kernel: IPI shorthand broadcast: enabled Jan 27 05:38:10.120613 kernel: sched_clock: Marking stable (2386002927, 154265587)->(2885481363, -345212849) Jan 27 05:38:10.120621 kernel: registered taskstats version 1 Jan 27 05:38:10.120630 kernel: Loading compiled-in X.509 certificates Jan 27 05:38:10.120638 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 9e3db75de0fafb28d6cceb2e9f9c71b82c500cb9' Jan 27 05:38:10.120647 kernel: Demotion targets for Node 0: null Jan 27 05:38:10.120657 kernel: Key type .fscrypt registered Jan 27 05:38:10.120666 kernel: Key type fscrypt-provisioning registered Jan 27 05:38:10.120674 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 27 05:38:10.120682 kernel: ima: Allocated hash algorithm: sha1 Jan 27 05:38:10.120691 kernel: ima: No architecture policies found Jan 27 05:38:10.120699 kernel: clk: Disabling unused clocks Jan 27 05:38:10.120707 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 27 05:38:10.120717 kernel: Write protecting the kernel read-only data: 47104k Jan 27 05:38:10.120726 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 27 05:38:10.120734 kernel: Run /init as init process Jan 27 05:38:10.120742 kernel: with arguments: Jan 27 05:38:10.120751 kernel: /init Jan 27 05:38:10.120759 kernel: with environment: Jan 27 05:38:10.120768 kernel: HOME=/ Jan 27 05:38:10.120778 kernel: TERM=linux Jan 27 05:38:10.120786 kernel: SCSI subsystem initialized Jan 27 05:38:10.120795 kernel: libata version 3.00 loaded. Jan 27 05:38:10.120894 kernel: ahci 0000:00:1f.2: version 3.0 Jan 27 05:38:10.120906 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 27 05:38:10.121000 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 27 05:38:10.122816 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 27 05:38:10.122931 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 27 05:38:10.123059 kernel: scsi host0: ahci Jan 27 05:38:10.123164 kernel: scsi host1: ahci Jan 27 05:38:10.123287 kernel: scsi host2: ahci Jan 27 05:38:10.123388 kernel: scsi host3: ahci Jan 27 05:38:10.123492 kernel: scsi host4: ahci Jan 27 05:38:10.123595 kernel: scsi host5: ahci Jan 27 05:38:10.123608 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 55 lpm-pol 1 Jan 27 05:38:10.123617 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 55 lpm-pol 1 Jan 27 05:38:10.123626 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 55 lpm-pol 1 Jan 27 05:38:10.123635 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 55 lpm-pol 1 Jan 27 05:38:10.123646 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 55 lpm-pol 1 Jan 27 05:38:10.123655 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 55 lpm-pol 1 Jan 27 05:38:10.123664 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 27 05:38:10.123673 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 27 05:38:10.123682 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 27 05:38:10.123690 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 27 05:38:10.123699 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 27 05:38:10.123708 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 27 05:38:10.123718 kernel: ACPI: bus type USB registered Jan 27 05:38:10.123728 kernel: usbcore: registered new interface driver usbfs Jan 27 05:38:10.123736 kernel: usbcore: registered new interface driver hub Jan 27 05:38:10.123745 kernel: usbcore: registered new device driver usb Jan 27 05:38:10.123851 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 27 05:38:10.123955 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 27 05:38:10.124075 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 27 05:38:10.124192 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 27 05:38:10.124318 kernel: hub 1-0:1.0: USB hub found Jan 27 05:38:10.124426 kernel: hub 1-0:1.0: 2 ports detected Jan 27 05:38:10.124535 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 27 05:38:10.124633 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks 
(53.7 GB/50.0 GiB) Jan 27 05:38:10.124648 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 27 05:38:10.124657 kernel: GPT:25804799 != 104857599 Jan 27 05:38:10.124666 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 27 05:38:10.124675 kernel: GPT:25804799 != 104857599 Jan 27 05:38:10.124683 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 27 05:38:10.124692 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 27 05:38:10.124704 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 27 05:38:10.124713 kernel: device-mapper: uevent: version 1.0.3 Jan 27 05:38:10.124721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 27 05:38:10.124730 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 27 05:38:10.124740 kernel: raid6: avx512x4 gen() 42326 MB/s Jan 27 05:38:10.124750 kernel: raid6: avx512x2 gen() 45322 MB/s Jan 27 05:38:10.124758 kernel: raid6: avx512x1 gen() 44373 MB/s Jan 27 05:38:10.124768 kernel: raid6: avx2x4 gen() 34010 MB/s Jan 27 05:38:10.124777 kernel: raid6: avx2x2 gen() 34050 MB/s Jan 27 05:38:10.124785 kernel: raid6: avx2x1 gen() 30591 MB/s Jan 27 05:38:10.124794 kernel: raid6: using algorithm avx512x2 gen() 45322 MB/s Jan 27 05:38:10.124803 kernel: raid6: .... xor() 26823 MB/s, rmw enabled Jan 27 05:38:10.124814 kernel: raid6: using avx512x2 recovery algorithm Jan 27 05:38:10.124823 kernel: xor: automatically using best checksumming function avx Jan 27 05:38:10.124946 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 27 05:38:10.124960 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 27 05:38:10.124969 kernel: BTRFS: device fsid 8e29e710-4356-4007-b707-6ae7cc95ead5 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (204) Jan 27 05:38:10.124978 kernel: BTRFS info (device dm-0): first mount of filesystem 8e29e710-4356-4007-b707-6ae7cc95ead5 Jan 27 05:38:10.124987 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:38:10.124996 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 27 05:38:10.125007 kernel: BTRFS info (device dm-0): enabling free space tree Jan 27 05:38:10.125016 kernel: loop: module loaded Jan 27 05:38:10.125024 kernel: loop0: detected capacity change from 0 to 100552 Jan 27 05:38:10.126395 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 27 05:38:10.126412 systemd[1]: Successfully made /usr/ read-only. Jan 27 05:38:10.126426 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 27 05:38:10.126439 systemd[1]: Detected virtualization kvm. Jan 27 05:38:10.126449 systemd[1]: Detected architecture x86-64. Jan 27 05:38:10.126458 systemd[1]: Running in initrd. Jan 27 05:38:10.126467 systemd[1]: No hostname configured, using default hostname. Jan 27 05:38:10.126476 systemd[1]: Hostname set to . Jan 27 05:38:10.126486 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 27 05:38:10.126495 systemd[1]: Queued start job for default target initrd.target. 
Jan 27 05:38:10.126506 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 27 05:38:10.126515 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 05:38:10.126525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 05:38:10.126535 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 27 05:38:10.126545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 27 05:38:10.126557 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 27 05:38:10.126566 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 27 05:38:10.126576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 05:38:10.126585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 27 05:38:10.126595 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 27 05:38:10.126604 systemd[1]: Reached target paths.target - Path Units. Jan 27 05:38:10.126614 systemd[1]: Reached target slices.target - Slice Units. Jan 27 05:38:10.126625 systemd[1]: Reached target swap.target - Swaps. Jan 27 05:38:10.126634 systemd[1]: Reached target timers.target - Timer Units. Jan 27 05:38:10.126643 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 27 05:38:10.126652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 05:38:10.126662 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 27 05:38:10.126671 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 27 05:38:10.126681 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 27 05:38:10.126691 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 27 05:38:10.126701 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 27 05:38:10.126710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 05:38:10.126719 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 05:38:10.126729 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 27 05:38:10.126738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 27 05:38:10.126749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 27 05:38:10.126758 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 27 05:38:10.126768 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 27 05:38:10.126778 systemd[1]: Starting systemd-fsck-usr.service... Jan 27 05:38:10.126787 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 27 05:38:10.126796 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 27 05:38:10.126808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:38:10.126818 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 27 05:38:10.126828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 05:38:10.126837 systemd[1]: Finished systemd-fsck-usr.service. Jan 27 05:38:10.126870 systemd-journald[342]: Collecting audit messages is enabled. Jan 27 05:38:10.126896 kernel: audit: type=1130 audit(1769492290.018:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.126906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 27 05:38:10.126918 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 27 05:38:10.126927 kernel: Bridge firewalling registered Jan 27 05:38:10.126936 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 27 05:38:10.126945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 27 05:38:10.126955 kernel: audit: type=1130 audit(1769492290.046:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.126964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:38:10.126973 kernel: audit: type=1130 audit(1769492290.070:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.126985 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 27 05:38:10.126994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 27 05:38:10.127004 kernel: audit: type=1130 audit(1769492290.086:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.127013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 27 05:38:10.127022 kernel: audit: type=1130 audit(1769492290.103:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.127040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 27 05:38:10.129064 kernel: audit: type=1334 audit(1769492290.104:7): prog-id=6 op=LOAD Jan 27 05:38:10.129079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 27 05:38:10.129092 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 27 05:38:10.129106 systemd-journald[342]: Journal started Jan 27 05:38:10.129130 systemd-journald[342]: Runtime Journal (/run/log/journal/e6ff11e3d35645eea71a090042876b8c) is 8M, max 77.9M, 69.9M free. Jan 27 05:38:10.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 27 05:38:10.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.104000 audit: BPF prog-id=6 op=LOAD Jan 27 05:38:10.044180 systemd-modules-load[344]: Inserted module 'br_netfilter' Jan 27 05:38:10.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.135393 systemd[1]: Started systemd-journald.service - Journal Service. Jan 27 05:38:10.135415 kernel: audit: type=1130 audit(1769492290.133:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.138870 kernel: audit: type=1130 audit(1769492290.138:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.142134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 05:38:10.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.147058 kernel: audit: type=1130 audit(1769492290.142:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.149151 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 27 05:38:10.152734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 27 05:38:10.165505 dracut-cmdline[377]: dracut-109 Jan 27 05:38:10.168620 dracut-cmdline[377]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=94a0aed2c135ea3629cf7bc829842658bafc4ce682f9974c582239b9a4f2cb9e Jan 27 05:38:10.179426 systemd-tmpfiles[378]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 27 05:38:10.181312 systemd-resolved[355]: Positive Trust Anchors: Jan 27 05:38:10.181321 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 27 05:38:10.181324 systemd-resolved[355]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 27 05:38:10.181354 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 27 05:38:10.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.188163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 05:38:10.209256 systemd-resolved[355]: Defaulting to hostname 'linux'. Jan 27 05:38:10.210736 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 27 05:38:10.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.211457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 27 05:38:10.269062 kernel: Loading iSCSI transport class v2.0-870. Jan 27 05:38:10.287065 kernel: iscsi: registered transport (tcp) Jan 27 05:38:10.311488 kernel: iscsi: registered transport (qla4xxx) Jan 27 05:38:10.311571 kernel: QLogic iSCSI HBA Driver Jan 27 05:38:10.338753 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 27 05:38:10.356517 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 05:38:10.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.358905 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 27 05:38:10.398977 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 27 05:38:10.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.400967 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 27 05:38:10.403087 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 27 05:38:10.432853 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 27 05:38:10.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.434000 audit: BPF prog-id=7 op=LOAD Jan 27 05:38:10.434000 audit: BPF prog-id=8 op=LOAD Jan 27 05:38:10.435153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 05:38:10.462845 systemd-udevd[611]: Using default interface naming scheme 'v257'. 
Jan 27 05:38:10.472339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 05:38:10.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.477323 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 27 05:38:10.501814 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 05:38:10.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.504000 audit: BPF prog-id=9 op=LOAD Jan 27 05:38:10.505721 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 27 05:38:10.509323 dracut-pre-trigger[687]: rd.md=0: removing MD RAID activation Jan 27 05:38:10.533928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 05:38:10.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.537160 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 27 05:38:10.553820 systemd-networkd[722]: lo: Link UP Jan 27 05:38:10.554468 systemd-networkd[722]: lo: Gained carrier Jan 27 05:38:10.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.554890 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 05:38:10.555517 systemd[1]: Reached target network.target - Network. Jan 27 05:38:10.626232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 05:38:10.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.627871 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 27 05:38:10.731988 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 27 05:38:10.752297 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 27 05:38:10.762490 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 27 05:38:10.772337 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 05:38:10.775488 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 27 05:38:10.795022 disk-uuid[784]: Primary Header is updated. Jan 27 05:38:10.795022 disk-uuid[784]: Secondary Entries is updated. Jan 27 05:38:10.795022 disk-uuid[784]: Secondary Header is updated. Jan 27 05:38:10.810061 kernel: cryptd: max_cpu_qlen set to 1000 Jan 27 05:38:10.813071 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 27 05:38:10.854044 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 27 05:38:10.859791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 27 05:38:10.860660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:38:10.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.862347 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:38:10.867053 kernel: usbcore: registered new interface driver usbhid Jan 27 05:38:10.867091 kernel: usbhid: USB HID core driver Jan 27 05:38:10.867277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:38:10.874523 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:38:10.874531 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 27 05:38:10.874852 systemd-networkd[722]: eth0: Link UP Jan 27 05:38:10.879286 systemd-networkd[722]: eth0: Gained carrier Jan 27 05:38:10.879301 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:38:10.899924 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 27 05:38:10.899972 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 27 05:38:10.908103 systemd-networkd[722]: eth0: DHCPv4 address 10.0.7.41/25, gateway 10.0.7.1 acquired from 10.0.7.1 Jan 27 05:38:10.912646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 05:38:10.913201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:38:10.915476 kernel: AES CTR mode by8 optimization enabled Jan 27 05:38:10.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.929543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:38:10.978494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:38:10.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.997072 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 27 05:38:10.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:10.998861 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 27 05:38:10.999877 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 05:38:11.000796 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 27 05:38:11.002622 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jan 27 05:38:11.028214 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 27 05:38:11.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:11.860144 disk-uuid[785]: Warning: The kernel is still using the old partition table. Jan 27 05:38:11.860144 disk-uuid[785]: The new table will be used at the next reboot or after you Jan 27 05:38:11.860144 disk-uuid[785]: run partprobe(8) or kpartx(8) Jan 27 05:38:11.860144 disk-uuid[785]: The operation has completed successfully. Jan 27 05:38:11.874329 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 27 05:38:11.899201 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 27 05:38:11.899266 kernel: audit: type=1130 audit(1769492291.875:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:11.899309 kernel: audit: type=1131 audit(1769492291.875:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:11.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:11.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:11.874456 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 27 05:38:11.877189 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 27 05:38:11.939066 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (910) Jan 27 05:38:11.943440 kernel: BTRFS info (device vda6): first mount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:38:11.943588 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:38:11.951394 kernel: BTRFS info (device vda6): turning on async discard Jan 27 05:38:11.951460 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 05:38:11.959050 kernel: BTRFS info (device vda6): last unmount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:38:11.959457 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 27 05:38:11.963542 kernel: audit: type=1130 audit(1769492291.959:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:11.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:11.961186 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 27 05:38:12.171317 ignition[929]: Ignition 2.24.0 Jan 27 05:38:12.171327 ignition[929]: Stage: fetch-offline Jan 27 05:38:12.171363 ignition[929]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:38:12.171372 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:38:12.177401 kernel: audit: type=1130 audit(1769492292.173:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:12.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:12.173213 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 27 05:38:12.171452 ignition[929]: parsed url from cmdline: "" Jan 27 05:38:12.171455 ignition[929]: no config URL provided Jan 27 05:38:12.171460 ignition[929]: reading system config file "/usr/lib/ignition/user.ign" Jan 27 05:38:12.178216 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 27 05:38:12.171467 ignition[929]: no config at "/usr/lib/ignition/user.ign" Jan 27 05:38:12.171471 ignition[929]: failed to fetch config: resource requires networking Jan 27 05:38:12.172227 ignition[929]: Ignition finished successfully Jan 27 05:38:12.203104 ignition[935]: Ignition 2.24.0 Jan 27 05:38:12.203793 ignition[935]: Stage: fetch Jan 27 05:38:12.203944 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:38:12.203951 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:38:12.204048 ignition[935]: parsed url from cmdline: "" Jan 27 05:38:12.204051 ignition[935]: no config URL provided Jan 27 05:38:12.204060 ignition[935]: reading system config file "/usr/lib/ignition/user.ign" Jan 27 05:38:12.204066 ignition[935]: no config at "/usr/lib/ignition/user.ign" Jan 27 05:38:12.204148 ignition[935]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 27 05:38:12.204163 ignition[935]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 27 05:38:12.204185 ignition[935]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 27 05:38:12.753242 systemd-networkd[722]: eth0: Gained IPv6LL Jan 27 05:38:13.136528 ignition[935]: GET result: OK Jan 27 05:38:13.136631 ignition[935]: parsing config with SHA512: 3e24e9c5b199af8d76f5502a326933d37d20d9409bca2f45bf9da5bbae59a6d2cf5e22e3748713938debca737dd914b7caea538cbd2e23539ea54401a2cdaae4 Jan 27 05:38:13.144982 unknown[935]: fetched base config from "system" Jan 27 05:38:13.144992 unknown[935]: fetched base config from "system" Jan 27 05:38:13.145327 ignition[935]: fetch: fetch complete Jan 27 05:38:13.144997 unknown[935]: fetched user config from "openstack" Jan 27 05:38:13.145332 ignition[935]: fetch: fetch passed Jan 27 05:38:13.145373 ignition[935]: Ignition finished successfully Jan 27 05:38:13.153156 kernel: audit: type=1130 audit(1769492293.149:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:13.148458 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 27 05:38:13.150417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 27 05:38:13.179672 ignition[941]: Ignition 2.24.0 Jan 27 05:38:13.179685 ignition[941]: Stage: kargs Jan 27 05:38:13.179857 ignition[941]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:38:13.179867 ignition[941]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:38:13.180808 ignition[941]: kargs: kargs passed Jan 27 05:38:13.186053 kernel: audit: type=1130 audit(1769492293.182:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.182467 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 27 05:38:13.180852 ignition[941]: Ignition finished successfully Jan 27 05:38:13.185982 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 27 05:38:13.211547 ignition[948]: Ignition 2.24.0 Jan 27 05:38:13.211558 ignition[948]: Stage: disks Jan 27 05:38:13.211717 ignition[948]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:38:13.211725 ignition[948]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:38:13.214398 ignition[948]: disks: disks passed Jan 27 05:38:13.214828 ignition[948]: Ignition finished successfully Jan 27 05:38:13.216214 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 27 05:38:13.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.217025 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 27 05:38:13.220593 kernel: audit: type=1130 audit(1769492293.216:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.220308 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 27 05:38:13.220941 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 27 05:38:13.221583 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 05:38:13.222216 systemd[1]: Reached target basic.target - Basic System. Jan 27 05:38:13.223771 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 27 05:38:13.273894 systemd-fsck[957]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 27 05:38:13.276957 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 27 05:38:13.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.280825 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 27 05:38:13.284661 kernel: audit: type=1130 audit(1769492293.278:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:13.443056 kernel: EXT4-fs (vda9): mounted filesystem a9099a9f-29a1-43d8-a05a-53a191872646 r/w with ordered data mode. Quota mode: none. Jan 27 05:38:13.443992 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 27 05:38:13.445404 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 27 05:38:13.448518 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 27 05:38:13.451132 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 27 05:38:13.452234 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 27 05:38:13.457114 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 27 05:38:13.457625 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 27 05:38:13.457654 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 05:38:13.463850 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 27 05:38:13.466074 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 27 05:38:13.479063 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Jan 27 05:38:13.484200 kernel: BTRFS info (device vda6): first mount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:38:13.484282 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:38:13.493416 kernel: BTRFS info (device vda6): turning on async discard Jan 27 05:38:13.493496 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 05:38:13.494295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 27 05:38:13.556061 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:13.688701 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 27 05:38:13.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.691148 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 27 05:38:13.695624 kernel: audit: type=1130 audit(1769492293.689:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.695836 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 27 05:38:13.717298 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 27 05:38:13.720050 kernel: BTRFS info (device vda6): last unmount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:38:13.743613 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 27 05:38:13.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:13.748645 ignition[1066]: INFO : Ignition 2.24.0 Jan 27 05:38:13.748645 ignition[1066]: INFO : Stage: mount Jan 27 05:38:13.748645 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 05:38:13.748645 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:38:13.748645 ignition[1066]: INFO : mount: mount passed Jan 27 05:38:13.748645 ignition[1066]: INFO : Ignition finished successfully Jan 27 05:38:13.751903 kernel: audit: type=1130 audit(1769492293.745:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:13.749952 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 27 05:38:14.598068 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:16.611551 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:20.628096 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:20.633015 coreos-metadata[967]: Jan 27 05:38:20.632 WARN failed to locate config-drive, using the metadata service API instead Jan 27 05:38:20.660668 coreos-metadata[967]: Jan 27 05:38:20.660 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 27 05:38:21.291867 coreos-metadata[967]: Jan 27 05:38:21.291 INFO Fetch successful Jan 27 05:38:21.293666 coreos-metadata[967]: Jan 27 05:38:21.293 INFO wrote hostname ci-4592-0-0-n-eb4c5d05b1 to /sysroot/etc/hostname Jan 27 05:38:21.295781 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 27 05:38:21.324366 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:38:21.324444 kernel: audit: type=1130 audit(1769492301.296:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:21.324482 kernel: audit: type=1131 audit(1769492301.296:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:21.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:21.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:21.295950 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 27 05:38:21.298684 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 27 05:38:21.345116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 27 05:38:21.390126 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1083) Jan 27 05:38:21.397635 kernel: BTRFS info (device vda6): first mount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:38:21.397685 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:38:21.409384 kernel: BTRFS info (device vda6): turning on async discard Jan 27 05:38:21.409437 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 05:38:21.412240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 27 05:38:21.449917 ignition[1100]: INFO : Ignition 2.24.0 Jan 27 05:38:21.449917 ignition[1100]: INFO : Stage: files Jan 27 05:38:21.451622 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 05:38:21.451622 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:38:21.451622 ignition[1100]: DEBUG : files: compiled without relabeling support, skipping Jan 27 05:38:21.453443 ignition[1100]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 27 05:38:21.453443 ignition[1100]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 27 05:38:21.461998 ignition[1100]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 27 05:38:21.462623 ignition[1100]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 27 05:38:21.462623 ignition[1100]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 27 05:38:21.462360 unknown[1100]: wrote ssh authorized keys file for user: core Jan 27 05:38:21.466323 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 27 05:38:21.467407 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 27 05:38:21.528366 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 05:38:21.651450 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 05:38:21.656534 ignition[1100]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 05:38:21.656534 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 05:38:21.656534 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 05:38:21.656534 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 05:38:21.656534 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 05:38:21.656534 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 27 05:38:22.017963 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 27 05:38:24.093741 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 05:38:24.093741 ignition[1100]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 27 05:38:24.097429 ignition[1100]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 05:38:24.100641 ignition[1100]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 05:38:24.100641 ignition[1100]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 27 05:38:24.100641 ignition[1100]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 27 05:38:24.100641 ignition[1100]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 27 05:38:24.108238 kernel: audit: type=1130 audit(1769492304.102:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.108311 ignition[1100]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 27 05:38:24.108311 ignition[1100]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 27 05:38:24.108311 ignition[1100]: INFO : files: files passed Jan 27 05:38:24.108311 ignition[1100]: INFO : Ignition finished successfully Jan 27 05:38:24.102074 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 27 05:38:24.103949 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 27 05:38:24.109815 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 27 05:38:24.122239 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 27 05:38:24.123080 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 27 05:38:24.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.130861 kernel: audit: type=1130 audit(1769492304.124:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.130901 kernel: audit: type=1131 audit(1769492304.124:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.135568 initrd-setup-root-after-ignition[1132]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 05:38:24.135568 initrd-setup-root-after-ignition[1132]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 27 05:38:24.138104 initrd-setup-root-after-ignition[1136]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 05:38:24.139826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 05:38:24.144653 kernel: audit: type=1130 audit(1769492304.140:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.140750 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 27 05:38:24.145999 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 27 05:38:24.182833 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 27 05:38:24.191017 kernel: audit: type=1130 audit(1769492304.183:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.191055 kernel: audit: type=1131 audit(1769492304.183:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.182927 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 27 05:38:24.184135 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 27 05:38:24.191567 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 27 05:38:24.192725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 27 05:38:24.193836 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 27 05:38:24.230313 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 05:38:24.234812 kernel: audit: type=1130 audit(1769492304.230:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.233156 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 27 05:38:24.254884 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 27 05:38:24.255009 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 27 05:38:24.256491 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 05:38:24.257593 systemd[1]: Stopped target timers.target - Timer Units. Jan 27 05:38:24.258527 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 27 05:38:24.263254 kernel: audit: type=1131 audit(1769492304.259:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.258647 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 05:38:24.263385 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 27 05:38:24.264415 systemd[1]: Stopped target basic.target - Basic System. Jan 27 05:38:24.265363 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 27 05:38:24.266321 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 05:38:24.267394 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 27 05:38:24.268372 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 27 05:38:24.269313 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 27 05:38:24.270188 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 27 05:38:24.271069 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 27 05:38:24.271984 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 27 05:38:24.272876 systemd[1]: Stopped target swap.target - Swaps. Jan 27 05:38:24.273742 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 27 05:38:24.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.273861 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 27 05:38:24.275081 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 27 05:38:24.276016 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 05:38:24.276789 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 27 05:38:24.276865 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 05:38:24.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.277645 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 27 05:38:24.277745 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 27 05:38:24.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.279007 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 27 05:38:24.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.279107 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 05:38:24.279903 systemd[1]: ignition-files.service: Deactivated successfully. Jan 27 05:38:24.280009 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 27 05:38:24.283202 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 27 05:38:24.283629 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 27 05:38:24.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.283733 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 05:38:24.286194 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 27 05:38:24.286612 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 27 05:38:24.286713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 05:38:24.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.289922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 27 05:38:24.290539 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 05:38:24.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.291560 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 27 05:38:24.292484 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 05:38:24.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.296701 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 27 05:38:24.297222 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 27 05:38:24.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.310449 ignition[1156]: INFO : Ignition 2.24.0 Jan 27 05:38:24.310449 ignition[1156]: INFO : Stage: umount Jan 27 05:38:24.312095 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 05:38:24.312095 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:38:24.312095 ignition[1156]: INFO : umount: umount passed Jan 27 05:38:24.312095 ignition[1156]: INFO : Ignition finished successfully Jan 27 05:38:24.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.313100 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 27 05:38:24.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.313626 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 27 05:38:24.315991 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 27 05:38:24.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.316580 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 27 05:38:24.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.318026 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 27 05:38:24.318085 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 27 05:38:24.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.318504 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 27 05:38:24.318542 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 27 05:38:24.320509 systemd[1]: Stopped target network.target - Network. Jan 27 05:38:24.321155 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 27 05:38:24.321199 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 27 05:38:24.321898 systemd[1]: Stopped target paths.target - Path Units. Jan 27 05:38:24.322566 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 27 05:38:24.324103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 05:38:24.324771 systemd[1]: Stopped target slices.target - Slice Units. Jan 27 05:38:24.325441 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 27 05:38:24.326214 systemd[1]: iscsid.socket: Deactivated successfully. Jan 27 05:38:24.326265 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 27 05:38:24.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.326886 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 27 05:38:24.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.326920 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 05:38:24.327615 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 27 05:38:24.327641 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 27 05:38:24.328313 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 27 05:38:24.328361 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 27 05:38:24.329003 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 27 05:38:24.329055 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 27 05:38:24.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.329775 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 27 05:38:24.330398 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 27 05:38:24.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.333320 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 27 05:38:24.333867 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 27 05:38:24.333953 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 27 05:38:24.335306 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 27 05:38:24.335356 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 27 05:38:24.339862 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 27 05:38:24.340057 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 27 05:38:24.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.341649 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 27 05:38:24.341736 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 27 05:38:24.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.344377 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 27 05:38:24.345000 audit: BPF prog-id=6 op=UNLOAD Jan 27 05:38:24.344886 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 27 05:38:24.345000 audit: BPF prog-id=9 op=UNLOAD Jan 27 05:38:24.344929 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 27 05:38:24.346273 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 27 05:38:24.347414 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 27 05:38:24.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.347466 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 05:38:24.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.349141 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 27 05:38:24.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.349185 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 27 05:38:24.349844 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 27 05:38:24.349881 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 27 05:38:24.350548 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 05:38:24.361558 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 27 05:38:24.362178 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 05:38:24.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.362779 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 27 05:38:24.362821 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 27 05:38:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.364424 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 27 05:38:24.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.364455 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 05:38:24.364808 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 27 05:38:24.364851 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 27 05:38:24.365369 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 27 05:38:24.365402 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 27 05:38:24.366267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 27 05:38:24.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.366301 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 27 05:38:24.369598 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 27 05:38:24.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.369978 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 27 05:38:24.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.370027 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 05:38:24.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.370465 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 27 05:38:24.370506 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 05:38:24.372500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 05:38:24.372539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:38:24.382833 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 27 05:38:24.382937 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 27 05:38:24.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.387770 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 27 05:38:24.387888 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 27 05:38:24.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:24.388996 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 27 05:38:24.391203 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 27 05:38:24.408139 systemd[1]: Switching root. Jan 27 05:38:24.436021 systemd-journald[342]: Journal stopped Jan 27 05:38:25.681332 systemd-journald[342]: Received SIGTERM from PID 1 (systemd). Jan 27 05:38:25.681428 kernel: SELinux: policy capability network_peer_controls=1 Jan 27 05:38:25.681445 kernel: SELinux: policy capability open_perms=1 Jan 27 05:38:25.681460 kernel: SELinux: policy capability extended_socket_class=1 Jan 27 05:38:25.681476 kernel: SELinux: policy capability always_check_network=0 Jan 27 05:38:25.681487 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 27 05:38:25.681504 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 27 05:38:25.681515 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 27 05:38:25.681531 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 27 05:38:25.681542 kernel: SELinux: policy capability userspace_initial_context=0 Jan 27 05:38:25.681555 systemd[1]: Successfully loaded SELinux policy in 56.786ms. 
Jan 27 05:38:25.681581 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.895ms. Jan 27 05:38:25.681594 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 27 05:38:25.681612 systemd[1]: Detected virtualization kvm. Jan 27 05:38:25.681624 systemd[1]: Detected architecture x86-64. Jan 27 05:38:25.681637 systemd[1]: Detected first boot. Jan 27 05:38:25.681651 systemd[1]: Hostname set to <ci-4592-0-0-n-eb4c5d05b1>. Jan 27 05:38:25.681663 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 27 05:38:25.681674 zram_generator::config[1201]: No configuration found. Jan 27 05:38:25.681696 kernel: Guest personality initialized and is inactive Jan 27 05:38:25.681711 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 27 05:38:25.681722 kernel: Initialized host personality Jan 27 05:38:25.681735 kernel: NET: Registered PF_VSOCK protocol family Jan 27 05:38:25.681749 systemd[1]: Populated /etc with preset unit settings. Jan 27 05:38:25.681765 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 27 05:38:25.681777 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 27 05:38:25.681789 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 27 05:38:25.681806 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 27 05:38:25.681820 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 27 05:38:25.681832 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 27 05:38:25.681846 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 27 05:38:25.681857 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 27 05:38:25.681869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 27 05:38:25.681882 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 27 05:38:25.681893 systemd[1]: Created slice user.slice - User and Session Slice. Jan 27 05:38:25.681907 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 05:38:25.681919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 05:38:25.681933 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 27 05:38:25.681945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 27 05:38:25.681956 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 27 05:38:25.681967 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 27 05:38:25.681979 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 27 05:38:25.681994 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 05:38:25.682006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 27 05:38:25.682023 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 27 05:38:25.687451 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 27 05:38:25.687478 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 27 05:38:25.687491 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 27 05:38:25.687503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 05:38:25.687521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 27 05:38:25.687533 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 27 05:38:25.687544 systemd[1]: Reached target slices.target - Slice Units. Jan 27 05:38:25.687557 systemd[1]: Reached target swap.target - Swaps. Jan 27 05:38:25.687569 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 27 05:38:25.687582 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 27 05:38:25.687593 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 27 05:38:25.687608 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 27 05:38:25.687621 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 27 05:38:25.687633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 27 05:38:25.687645 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 27 05:38:25.687658 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 27 05:38:25.687675 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 27 05:38:25.687688 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 05:38:25.687701 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 27 05:38:25.687713 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 27 05:38:25.687725 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 27 05:38:25.687736 systemd[1]: Mounting media.mount - External Media Directory... Jan 27 05:38:25.687748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:38:25.687763 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 27 05:38:25.687774 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 27 05:38:25.687788 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 27 05:38:25.687800 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 27 05:38:25.687816 systemd[1]: Reached target machines.target - Containers. Jan 27 05:38:25.687828 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 27 05:38:25.687840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 05:38:25.687852 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 27 05:38:25.687863 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 27 05:38:25.687881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 05:38:25.687909 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 27 05:38:25.687921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 27 05:38:25.687934 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 27 05:38:25.687946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 05:38:25.687958 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 27 05:38:25.687969 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 27 05:38:25.687981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 27 05:38:25.687992 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 27 05:38:25.688004 systemd[1]: Stopped systemd-fsck-usr.service. Jan 27 05:38:25.688018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 05:38:25.692045 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 27 05:38:25.692088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 27 05:38:25.692101 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 27 05:38:25.692114 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 27 05:38:25.692127 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 27 05:38:25.692140 kernel: fuse: init (API version 7.41) Jan 27 05:38:25.692160 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 27 05:38:25.692173 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:38:25.692186 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 27 05:38:25.692199 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 27 05:38:25.692215 systemd[1]: Mounted media.mount - External Media Directory. Jan 27 05:38:25.692226 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 27 05:38:25.692238 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 27 05:38:25.692250 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 27 05:38:25.692262 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 05:38:25.692275 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 27 05:38:25.692287 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 27 05:38:25.692301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 05:38:25.692316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 05:38:25.692328 kernel: ACPI: bus type drm_connector registered Jan 27 05:38:25.692339 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 27 05:38:25.692351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 27 05:38:25.692365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 05:38:25.692382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 05:38:25.692397 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 27 05:38:25.692409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 27 05:38:25.692422 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 27 05:38:25.692434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 27 05:38:25.692445 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 27 05:38:25.692457 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 27 05:38:25.692473 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 27 05:38:25.692485 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 27 05:38:25.692499 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 27 05:38:25.692512 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 27 05:38:25.692524 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 27 05:38:25.692536 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 27 05:38:25.692548 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 27 05:38:25.692561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 05:38:25.692576 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 05:38:25.692588 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 27 05:38:25.692631 systemd-journald[1274]: Collecting audit messages is enabled. Jan 27 05:38:25.692660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 27 05:38:25.692672 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 27 05:38:25.692684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 27 05:38:25.692698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 27 05:38:25.692711 systemd-journald[1274]: Journal started Jan 27 05:38:25.692739 systemd-journald[1274]: Runtime Journal (/run/log/journal/e6ff11e3d35645eea71a090042876b8c) is 8M, max 77.9M, 69.9M free. Jan 27 05:38:25.407000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 27 05:38:25.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:25.515000 audit: BPF prog-id=14 op=UNLOAD Jan 27 05:38:25.515000 audit: BPF prog-id=13 op=UNLOAD Jan 27 05:38:25.516000 audit: BPF prog-id=15 op=LOAD Jan 27 05:38:25.516000 audit: BPF prog-id=16 op=LOAD Jan 27 05:38:25.516000 audit: BPF prog-id=17 op=LOAD Jan 27 05:38:25.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:25.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.675000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 27 05:38:25.675000 audit[1274]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc22622a80 a2=4000 a3=0 items=0 ppid=1 pid=1274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:38:25.675000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 27 05:38:25.329515 systemd[1]: Queued start job for default target multi-user.target. Jan 27 05:38:25.696092 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 27 05:38:25.349234 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 27 05:38:25.349691 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 27 05:38:25.707727 systemd[1]: Started systemd-journald.service - Journal Service. Jan 27 05:38:25.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.700801 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 05:38:25.703666 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 27 05:38:25.704463 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 27 05:38:25.718252 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 27 05:38:25.720195 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 27 05:38:25.734056 kernel: loop1: detected capacity change from 0 to 50784 Jan 27 05:38:25.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.733387 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 27 05:38:25.734263 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 27 05:38:25.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.735432 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 27 05:38:25.740788 systemd-journald[1274]: Time spent on flushing to /var/log/journal/e6ff11e3d35645eea71a090042876b8c is 91.701ms for 1848 entries. Jan 27 05:38:25.740788 systemd-journald[1274]: System Journal (/var/log/journal/e6ff11e3d35645eea71a090042876b8c) is 8M, max 588.1M, 580.1M free. Jan 27 05:38:26.104570 systemd-journald[1274]: Received client request to flush runtime journal. Jan 27 05:38:26.104649 kernel: loop2: detected capacity change from 0 to 111560 Jan 27 05:38:25.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:25.741323 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 27 05:38:25.744198 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 27 05:38:25.775626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 27 05:38:25.793574 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 05:38:26.110639 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 27 05:38:26.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.113221 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 27 05:38:26.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.116000 audit: BPF prog-id=18 op=LOAD Jan 27 05:38:26.116000 audit: BPF prog-id=19 op=LOAD Jan 27 05:38:26.116000 audit: BPF prog-id=20 op=LOAD Jan 27 05:38:26.118308 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 27 05:38:26.120000 audit: BPF prog-id=21 op=LOAD Jan 27 05:38:26.123159 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 27 05:38:26.126241 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 27 05:38:26.128141 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 27 05:38:26.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:26.139000 audit: BPF prog-id=22 op=LOAD Jan 27 05:38:26.139000 audit: BPF prog-id=23 op=LOAD Jan 27 05:38:26.139000 audit: BPF prog-id=24 op=LOAD Jan 27 05:38:26.139976 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 27 05:38:26.142000 audit: BPF prog-id=25 op=LOAD Jan 27 05:38:26.142000 audit: BPF prog-id=26 op=LOAD Jan 27 05:38:26.142000 audit: BPF prog-id=27 op=LOAD Jan 27 05:38:26.143274 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 27 05:38:26.155094 kernel: loop3: detected capacity change from 0 to 224512 Jan 27 05:38:26.171839 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Jan 27 05:38:26.171853 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Jan 27 05:38:26.179170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 05:38:26.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.203385 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 27 05:38:26.210773 systemd-nsresourced[1348]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 27 05:38:26.211943 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 27 05:38:26.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.224117 kernel: loop4: detected capacity change from 0 to 1656 Jan 27 05:38:26.256065 kernel: loop5: detected capacity change from 0 to 50784 Jan 27 05:38:26.282056 kernel: loop6: detected capacity change from 0 to 111560 Jan 27 05:38:26.284293 systemd-oomd[1343]: No swap; memory pressure usage will be degraded Jan 27 05:38:26.285138 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 27 05:38:26.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.308735 systemd-resolved[1344]: Positive Trust Anchors: Jan 27 05:38:26.309014 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 27 05:38:26.309020 systemd-resolved[1344]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 27 05:38:26.309063 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 27 05:38:26.321317 kernel: loop7: detected capacity change from 0 to 224512 Jan 27 05:38:26.331262 systemd-resolved[1344]: Using system hostname 'ci-4592-0-0-n-eb4c5d05b1'. Jan 27 05:38:26.332956 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 27 05:38:26.337574 kernel: kauditd_printk_skb: 102 callbacks suppressed Jan 27 05:38:26.337686 kernel: audit: type=1130 audit(1769492306.333:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.335958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 27 05:38:26.351154 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 27 05:38:26.383453 kernel: audit: type=1130 audit(1769492306.379:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.378471 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 27 05:38:26.381000 audit: BPF prog-id=8 op=UNLOAD Jan 27 05:38:26.385863 kernel: audit: type=1334 audit(1769492306.381:153): prog-id=8 op=UNLOAD Jan 27 05:38:26.385906 kernel: audit: type=1334 audit(1769492306.381:154): prog-id=7 op=UNLOAD Jan 27 05:38:26.381000 audit: BPF prog-id=7 op=UNLOAD Jan 27 05:38:26.387049 kernel: audit: type=1334 audit(1769492306.383:155): prog-id=28 op=LOAD Jan 27 05:38:26.383000 audit: BPF prog-id=28 op=LOAD Jan 27 05:38:26.386365 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 05:38:26.383000 audit: BPF prog-id=29 op=LOAD Jan 27 05:38:26.388055 kernel: audit: type=1334 audit(1769492306.383:156): prog-id=29 op=LOAD Jan 27 05:38:26.407064 kernel: loop1: detected capacity change from 0 to 1656 Jan 27 05:38:26.415150 (sd-merge)[1370]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Jan 27 05:38:26.419003 (sd-merge)[1370]: Merged extensions into '/usr'. Jan 27 05:38:26.424144 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Jan 27 05:38:26.424166 systemd[1]: Reloading... 
Jan 27 05:38:26.432273 systemd-udevd[1372]: Using default interface naming scheme 'v257'. Jan 27 05:38:26.506069 zram_generator::config[1401]: No configuration found. Jan 27 05:38:26.569055 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 27 05:38:26.569134 kernel: mousedev: PS/2 mouse device common for all mice Jan 27 05:38:26.579054 kernel: ACPI: button: Power Button [PWRF] Jan 27 05:38:26.630056 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 27 05:38:26.631054 kernel: Console: switching to colour dummy device 80x25 Jan 27 05:38:26.633144 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 27 05:38:26.633391 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 27 05:38:26.633414 kernel: [drm] features: -context_init Jan 27 05:38:26.637372 kernel: [drm] number of scanouts: 1 Jan 27 05:38:26.637431 kernel: [drm] number of cap sets: 0 Jan 27 05:38:26.639051 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 27 05:38:26.644047 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 27 05:38:26.645052 kernel: Console: switching to colour frame buffer device 160x50 Jan 27 05:38:26.650055 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 27 05:38:26.737047 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 27 05:38:26.737331 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 27 05:38:26.739047 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 27 05:38:26.808157 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 05:38:26.808856 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 27 05:38:26.808909 systemd[1]: Reloading finished in 384 ms. Jan 27 05:38:26.825701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 05:38:26.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.832062 kernel: audit: type=1130 audit(1769492306.828:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.832760 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 27 05:38:26.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.840059 kernel: audit: type=1130 audit(1769492306.835:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:26.883285 systemd[1]: Starting ensure-sysext.service... Jan 27 05:38:26.888000 audit: BPF prog-id=30 op=LOAD Jan 27 05:38:26.886791 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 27 05:38:26.890062 kernel: audit: type=1334 audit(1769492306.888:159): prog-id=30 op=LOAD Jan 27 05:38:26.892288 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 27 05:38:26.896257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 27 05:38:26.902000 audit: BPF prog-id=31 op=LOAD Jan 27 05:38:26.902000 audit: BPF prog-id=15 op=UNLOAD Jan 27 05:38:26.902000 audit: BPF prog-id=32 op=LOAD Jan 27 05:38:26.902000 audit: BPF prog-id=33 op=LOAD Jan 27 05:38:26.902000 audit: BPF prog-id=16 op=UNLOAD Jan 27 05:38:26.902000 audit: BPF prog-id=17 op=UNLOAD Jan 27 05:38:26.902000 audit: BPF prog-id=34 op=LOAD Jan 27 05:38:26.902000 audit: BPF prog-id=18 op=UNLOAD Jan 27 05:38:26.902000 audit: BPF prog-id=35 op=LOAD Jan 27 05:38:26.904076 kernel: audit: type=1334 audit(1769492306.902:160): prog-id=31 op=LOAD Jan 27 05:38:26.904000 audit: BPF prog-id=36 op=LOAD Jan 27 05:38:26.904000 audit: BPF prog-id=19 op=UNLOAD Jan 27 05:38:26.904000 audit: BPF prog-id=20 op=UNLOAD Jan 27 05:38:26.904000 audit: BPF prog-id=37 op=LOAD Jan 27 05:38:26.904000 audit: BPF prog-id=21 op=UNLOAD Jan 27 05:38:26.905000 audit: BPF prog-id=38 op=LOAD Jan 27 05:38:26.905000 audit: BPF prog-id=39 op=LOAD Jan 27 05:38:26.905000 audit: BPF prog-id=28 op=UNLOAD Jan 27 05:38:26.905000 audit: BPF prog-id=29 op=UNLOAD Jan 27 05:38:26.905000 audit: BPF prog-id=40 op=LOAD Jan 27 05:38:26.905000 audit: BPF prog-id=22 op=UNLOAD Jan 27 05:38:26.905000 audit: BPF prog-id=41 op=LOAD Jan 27 05:38:26.905000 audit: BPF prog-id=42 op=LOAD Jan 27 05:38:26.905000 audit: BPF prog-id=23 op=UNLOAD Jan 27 05:38:26.905000 audit: BPF prog-id=24 op=UNLOAD Jan 27 05:38:26.906000 audit: BPF prog-id=43 op=LOAD Jan 27 05:38:26.906000 audit: BPF prog-id=25 op=UNLOAD Jan 27 05:38:26.906000 audit: BPF prog-id=44 op=LOAD Jan 27 05:38:26.906000 audit: BPF prog-id=45 op=LOAD Jan 27 05:38:26.906000 audit: BPF prog-id=26 op=UNLOAD Jan 27 05:38:26.906000 audit: BPF prog-id=27 op=UNLOAD Jan 27 05:38:26.918148 systemd[1]: Reload requested from client PID 1487 ('systemctl') (unit ensure-sysext.service)... Jan 27 05:38:26.918158 systemd[1]: Reloading... Jan 27 05:38:26.950144 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 27 05:38:26.950168 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 27 05:38:26.951485 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 27 05:38:26.953530 systemd-tmpfiles[1490]: ACLs are not supported, ignoring. Jan 27 05:38:26.953578 systemd-tmpfiles[1490]: ACLs are not supported, ignoring. Jan 27 05:38:26.966060 zram_generator::config[1522]: No configuration found. Jan 27 05:38:26.965064 systemd-tmpfiles[1490]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 05:38:26.965071 systemd-tmpfiles[1490]: Skipping /boot Jan 27 05:38:26.974401 systemd-tmpfiles[1490]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 05:38:26.974413 systemd-tmpfiles[1490]: Skipping /boot Jan 27 05:38:27.041559 systemd-networkd[1489]: lo: Link UP Jan 27 05:38:27.041826 systemd-networkd[1489]: lo: Gained carrier Jan 27 05:38:27.044327 systemd-networkd[1489]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:38:27.044334 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 27 05:38:27.044892 systemd-networkd[1489]: eth0: Link UP Jan 27 05:38:27.045100 systemd-networkd[1489]: eth0: Gained carrier Jan 27 05:38:27.045163 systemd-networkd[1489]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:38:27.057251 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.7.41/25, gateway 10.0.7.1 acquired from 10.0.7.1 Jan 27 05:38:27.223158 systemd[1]: Reloading finished in 304 ms. Jan 27 05:38:27.252191 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 05:38:27.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.254000 audit: BPF prog-id=46 op=LOAD Jan 27 05:38:27.254000 audit: BPF prog-id=34 op=UNLOAD Jan 27 05:38:27.254000 audit: BPF prog-id=47 op=LOAD Jan 27 05:38:27.254000 audit: BPF prog-id=48 op=LOAD Jan 27 05:38:27.254000 audit: BPF prog-id=35 op=UNLOAD Jan 27 05:38:27.254000 audit: BPF prog-id=36 op=UNLOAD Jan 27 05:38:27.255000 audit: BPF prog-id=49 op=LOAD Jan 27 05:38:27.255000 audit: BPF prog-id=30 op=UNLOAD Jan 27 05:38:27.255000 audit: BPF prog-id=50 op=LOAD Jan 27 05:38:27.255000 audit: BPF prog-id=51 op=LOAD Jan 27 05:38:27.255000 audit: BPF prog-id=38 op=UNLOAD Jan 27 05:38:27.255000 audit: BPF prog-id=39 op=UNLOAD Jan 27 05:38:27.256000 audit: BPF prog-id=52 op=LOAD Jan 27 05:38:27.256000 audit: BPF prog-id=31 op=UNLOAD Jan 27 05:38:27.256000 audit: BPF prog-id=53 op=LOAD Jan 27 05:38:27.256000 audit: BPF prog-id=54 op=LOAD Jan 27 05:38:27.256000 audit: BPF prog-id=32 op=UNLOAD Jan 27 05:38:27.256000 audit: BPF prog-id=33 op=UNLOAD Jan 27 05:38:27.256000 audit: BPF prog-id=55 op=LOAD Jan 27 05:38:27.256000 audit: BPF prog-id=40 op=UNLOAD Jan 27 05:38:27.256000 audit: BPF prog-id=56 op=LOAD Jan 27 05:38:27.256000 audit: BPF prog-id=57 op=LOAD Jan 27 05:38:27.256000 audit: BPF prog-id=41 op=UNLOAD Jan 27 05:38:27.256000 audit: BPF prog-id=42 op=UNLOAD Jan 27 05:38:27.257000 audit: BPF prog-id=58 op=LOAD Jan 27 05:38:27.257000 audit: BPF prog-id=37 op=UNLOAD Jan 27 05:38:27.257000 audit: BPF prog-id=59 op=LOAD Jan 27 05:38:27.257000 audit: BPF prog-id=43 op=UNLOAD Jan 27 05:38:27.257000 audit: BPF prog-id=60 op=LOAD Jan 27 05:38:27.257000 audit: BPF prog-id=61 op=LOAD Jan 27 05:38:27.257000 audit: BPF prog-id=44 op=UNLOAD Jan 27 05:38:27.257000 audit: BPF prog-id=45 op=UNLOAD Jan 27 05:38:27.263563 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 27 05:38:27.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.265732 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 05:38:27.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.283568 systemd[1]: Reached target network.target - Network. Jan 27 05:38:27.285385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 27 05:38:27.288372 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 05:38:27.291219 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 27 05:38:27.295873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 05:38:27.303470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 05:38:27.307227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 27 05:38:27.310328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 05:38:27.311990 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 05:38:27.312190 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 05:38:27.318677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 27 05:38:27.320136 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 05:38:27.322248 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 27 05:38:27.325821 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 27 05:38:27.334778 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 27 05:38:27.340650 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 27 05:38:27.342469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:38:27.344771 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:38:27.347590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 05:38:27.347814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 05:38:27.355921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 05:38:27.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.356122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 05:38:27.356940 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 27 05:38:27.362234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 27 05:38:27.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.372124 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:38:27.372393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 05:38:27.380000 audit[1591]: SYSTEM_BOOT pid=1591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.382325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 05:38:27.398859 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 27 05:38:27.415233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 27 05:38:27.425549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 05:38:27.431593 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 27 05:38:27.432733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 05:38:27.432943 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 05:38:27.433086 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 05:38:27.433302 systemd[1]: Reached target time-set.target - System Time Set. Jan 27 05:38:27.434146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:38:27.437810 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 27 05:38:27.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.440641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 05:38:27.440886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:38:27.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:27.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:27.443000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 27 05:38:27.443000 audit[1614]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd48ff8e90 a2=420 a3=0 items=0 ppid=1573 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:38:27.443000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 05:38:27.443644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 05:38:27.445178 augenrules[1614]: No rules Jan 27 05:38:27.443817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 05:38:27.446213 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 05:38:27.446416 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 27 05:38:27.447440 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 27 05:38:27.451779 systemd[1]: Finished ensure-sysext.service. Jan 27 05:38:27.455723 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 27 05:38:27.457504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 27 05:38:27.465000 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 27 05:38:27.465255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 27 05:38:27.470178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 05:38:27.471533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 05:38:27.475184 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 27 05:38:27.481843 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 27 05:38:27.481912 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 27 05:38:27.478337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 27 05:38:27.478435 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 27 05:38:27.480100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:38:27.490168 kernel: PTP clock support registered Jan 27 05:38:27.494355 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 27 05:38:27.494569 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 27 05:38:27.535991 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 27 05:38:27.538300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 27 05:38:27.576632 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:38:28.148478 ldconfig[1583]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 27 05:38:28.154267 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 27 05:38:28.156170 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 27 05:38:28.178049 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 27 05:38:28.179678 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 05:38:28.180155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 27 05:38:28.180673 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 27 05:38:28.181131 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 27 05:38:28.181696 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 27 05:38:28.183359 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 27 05:38:28.184417 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 27 05:38:28.184864 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 27 05:38:28.185215 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 27 05:38:28.185539 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 27 05:38:28.185564 systemd[1]: Reached target paths.target - Path Units. Jan 27 05:38:28.185880 systemd[1]: Reached target timers.target - Timer Units. Jan 27 05:38:28.189746 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 27 05:38:28.191687 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 27 05:38:28.195009 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 27 05:38:28.196521 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 27 05:38:28.197106 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 27 05:38:28.208174 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 27 05:38:28.209810 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 27 05:38:28.212702 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 27 05:38:28.215590 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 05:38:28.217077 systemd[1]: Reached target basic.target - Basic System. Jan 27 05:38:28.218359 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 27 05:38:28.218537 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 27 05:38:28.221025 systemd[1]: Starting chronyd.service - NTP client/server... Jan 27 05:38:28.224147 systemd[1]: Starting containerd.service - containerd container runtime... Jan 27 05:38:28.231266 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 27 05:38:28.234209 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 27 05:38:28.241159 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 27 05:38:28.242775 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 27 05:38:28.250282 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 27 05:38:28.251054 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:28.253347 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 27 05:38:28.259273 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 27 05:38:28.262812 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 27 05:38:28.265707 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 27 05:38:28.268214 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 27 05:38:28.273864 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 27 05:38:28.277850 jq[1646]: false Jan 27 05:38:28.284112 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 27 05:38:28.284543 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 27 05:38:28.288241 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 27 05:38:28.292576 oslogin_cache_refresh[1650]: Refreshing passwd entry cache Jan 27 05:38:28.293381 google_oslogin_nss_cache[1650]: oslogin_cache_refresh[1650]: Refreshing passwd entry cache Jan 27 05:38:28.291867 systemd[1]: Starting update-engine.service - Update Engine... Jan 27 05:38:28.302152 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 27 05:38:28.307079 chronyd[1641]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 27 05:38:28.307792 chronyd[1641]: Loaded seccomp filter (level 2) Jan 27 05:38:28.308327 systemd[1]: Started chronyd.service - NTP client/server. Jan 27 05:38:28.308405 oslogin_cache_refresh[1650]: Failure getting users, quitting Jan 27 05:38:28.308987 google_oslogin_nss_cache[1650]: oslogin_cache_refresh[1650]: Failure getting users, quitting Jan 27 05:38:28.308987 google_oslogin_nss_cache[1650]: oslogin_cache_refresh[1650]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 27 05:38:28.308987 google_oslogin_nss_cache[1650]: oslogin_cache_refresh[1650]: Refreshing group entry cache Jan 27 05:38:28.308421 oslogin_cache_refresh[1650]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 27 05:38:28.308459 oslogin_cache_refresh[1650]: Refreshing group entry cache Jan 27 05:38:28.311640 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 27 05:38:28.312264 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 27 05:38:28.312448 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 27 05:38:28.314334 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 27 05:38:28.315066 google_oslogin_nss_cache[1650]: oslogin_cache_refresh[1650]: Failure getting groups, quitting Jan 27 05:38:28.315066 google_oslogin_nss_cache[1650]: oslogin_cache_refresh[1650]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 27 05:38:28.315027 oslogin_cache_refresh[1650]: Failure getting groups, quitting Jan 27 05:38:28.315052 oslogin_cache_refresh[1650]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 27 05:38:28.315632 extend-filesystems[1647]: Found /dev/vda6 Jan 27 05:38:28.318792 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 27 05:38:28.319457 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 27 05:38:28.319644 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 27 05:38:28.328089 extend-filesystems[1647]: Found /dev/vda9 Jan 27 05:38:28.338102 extend-filesystems[1647]: Checking size of /dev/vda9 Jan 27 05:38:28.344837 jq[1660]: true Jan 27 05:38:28.367327 tar[1666]: linux-amd64/LICENSE Jan 27 05:38:28.370313 tar[1666]: linux-amd64/helm Jan 27 05:38:28.380285 update_engine[1659]: I20260127 05:38:28.379374 1659 main.cc:92] Flatcar Update Engine starting Jan 27 05:38:28.387576 systemd[1]: motdgen.service: Deactivated successfully. Jan 27 05:38:28.389877 jq[1687]: true Jan 27 05:38:28.387825 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 27 05:38:28.395928 extend-filesystems[1647]: Resized partition /dev/vda9 Jan 27 05:38:28.407625 extend-filesystems[1698]: resize2fs 1.47.3 (8-Jul-2025) Jan 27 05:38:28.421436 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 27 05:38:28.434867 dbus-daemon[1644]: [system] SELinux support is enabled Jan 27 05:38:28.435112 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 27 05:38:28.441048 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 27 05:38:28.441663 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 27 05:38:28.444166 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 27 05:38:28.444187 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 27 05:38:28.460675 systemd[1]: Started update-engine.service - Update Engine. Jan 27 05:38:28.464445 update_engine[1659]: I20260127 05:38:28.462244 1659 update_check_scheduler.cc:74] Next update check in 3m26s Jan 27 05:38:28.471731 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 27 05:38:28.540847 systemd-logind[1655]: New seat seat0. Jan 27 05:38:28.627139 containerd[1681]: time="2026-01-27T05:38:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 27 05:38:28.628184 systemd-logind[1655]: Watching system buttons on /dev/input/event3 (Power Button) Jan 27 05:38:28.628203 systemd-logind[1655]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 27 05:38:28.628378 containerd[1681]: time="2026-01-27T05:38:28.628311243Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 27 05:38:28.631258 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 27 05:38:28.635658 locksmithd[1703]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 27 05:38:28.651476 containerd[1681]: time="2026-01-27T05:38:28.639641757Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.775µs" Jan 27 05:38:28.651476 containerd[1681]: time="2026-01-27T05:38:28.639673095Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 27 05:38:28.651476 containerd[1681]: time="2026-01-27T05:38:28.639710483Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 27 05:38:28.651476 containerd[1681]: time="2026-01-27T05:38:28.639722106Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652408495Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652448519Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652497611Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652508503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652691982Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652703090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652712842Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 05:38:28.652757 containerd[1681]: time="2026-01-27T05:38:28.652720204Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.653261078Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.653288745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.653361752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.653527666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.653555696Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.653565591Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.653614914Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 27 05:38:28.654422 containerd[1681]: time="2026-01-27T05:38:28.654145171Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 27 05:38:28.657216 containerd[1681]: time="2026-01-27T05:38:28.657172377Z" level=info msg="metadata content store policy set" policy=shared Jan 27 05:38:28.739958 bash[1719]: Updated "/home/core/.ssh/authorized_keys" Jan 27 05:38:28.742827 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 27 05:38:28.748165 systemd[1]: Starting sshkeys.service... Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763340334Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763415106Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763487194Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763499613Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763511832Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763521606Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763533601Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763542253Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763552096Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763561450Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763572696Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763582236Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763599481Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager 
type=io.containerd.shim.v1 Jan 27 05:38:28.764048 containerd[1681]: time="2026-01-27T05:38:28.763616076Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.763719523Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.763752353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.763769951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.763779401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.763788636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.763798317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764252309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764267237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764278208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764519615Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764532443Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764555622Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764594960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764606118Z" level=info msg="Start snapshots syncer" Jan 27 05:38:28.765527 containerd[1681]: time="2026-01-27T05:38:28.764639208Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 27 05:38:28.767934 containerd[1681]: time="2026-01-27T05:38:28.764882174Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 27 05:38:28.767934 containerd[1681]: time="2026-01-27T05:38:28.765240524Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 27 05:38:28.767295 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765648780Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765776740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765795446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765806183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765814656Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765825187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765834366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765843840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765853224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.765864327Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.766169358Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.766186259Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 05:38:28.771390 containerd[1681]: time="2026-01-27T05:38:28.766195447Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766204517Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766258140Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766269263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766279217Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766835058Z" level=info msg="runtime interface created" Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766841225Z" level=info msg="created NRI interface" Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766851843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri 
type=io.containerd.grpc.v1 Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766866779Z" level=info msg="Connect containerd service" Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.766901238Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 27 05:38:28.771636 containerd[1681]: time="2026-01-27T05:38:28.768645218Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 27 05:38:28.772687 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 27 05:38:28.795048 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:28.930887 sshd_keygen[1665]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 27 05:38:28.953949 containerd[1681]: time="2026-01-27T05:38:28.953906670Z" level=info msg="Start subscribing containerd event" Jan 27 05:38:28.954077 containerd[1681]: time="2026-01-27T05:38:28.953949812Z" level=info msg="Start recovering state" Jan 27 05:38:28.954461 containerd[1681]: time="2026-01-27T05:38:28.954443564Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 27 05:38:28.954539 containerd[1681]: time="2026-01-27T05:38:28.954490069Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 27 05:38:28.955003 containerd[1681]: time="2026-01-27T05:38:28.954986660Z" level=info msg="Start event monitor" Jan 27 05:38:28.955060 containerd[1681]: time="2026-01-27T05:38:28.955005334Z" level=info msg="Start cni network conf syncer for default" Jan 27 05:38:28.955060 containerd[1681]: time="2026-01-27T05:38:28.955011729Z" level=info msg="Start streaming server" Jan 27 05:38:28.955060 containerd[1681]: time="2026-01-27T05:38:28.955020443Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 27 05:38:28.955060 containerd[1681]: time="2026-01-27T05:38:28.955028230Z" level=info msg="runtime interface starting up..." Jan 27 05:38:28.955060 containerd[1681]: time="2026-01-27T05:38:28.955044636Z" level=info msg="starting plugins..." Jan 27 05:38:28.955060 containerd[1681]: time="2026-01-27T05:38:28.955056922Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 27 05:38:28.955245 containerd[1681]: time="2026-01-27T05:38:28.955158500Z" level=info msg="containerd successfully booted in 0.341240s" Jan 27 05:38:28.955383 systemd[1]: Started containerd.service - containerd container runtime. Jan 27 05:38:28.965581 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 27 05:38:28.968112 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 27 05:38:28.969839 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 27 05:38:28.985675 systemd[1]: issuegen.service: Deactivated successfully. Jan 27 05:38:28.985902 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 27 05:38:28.987863 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 27 05:38:29.010110 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 27 05:38:29.014103 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 27 05:38:29.020890 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 27 05:38:29.022542 systemd[1]: Reached target getty.target - Login Prompts. 
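The error record above, "no network config found in /etc/cni/net.d: cni plugin not initialized", is the normal state of a node that has no pod network add-on installed yet; the "Start cni network conf syncer for default" entry indicates the directory is watched, so the error should clear on its own once a CNI config file lands there. The following is a small sketch of the directory check behind that message; the accepted extensions are the ones CNI config loaders typically use and are an assumption here, not containerd's exact matching rules.

# Sketch: reproduce the check behind "no network config found in /etc/cni/net.d".
# The extension list is illustrative, not containerd's exact matching logic.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")        # confDir from the CRI config above
CANDIDATE_SUFFIXES = {".conf", ".conflist", ".json"}

def cni_configs(conf_dir: Path = CNI_CONF_DIR) -> list[Path]:
    """Return the CNI config files a runtime would consider, sorted by name."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir() if p.suffix in CANDIDATE_SUFFIXES)

if __name__ == "__main__":
    found = cni_configs()
    if found:
        print("CNI config candidates:", ", ".join(p.name for p in found))
    else:
        print("no network config found in", CNI_CONF_DIR, "- pod networking not ready")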
Jan 27 05:38:29.052947 extend-filesystems[1698]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 27 05:38:29.052947 extend-filesystems[1698]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 27 05:38:29.052947 extend-filesystems[1698]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 27 05:38:29.054756 extend-filesystems[1647]: Resized filesystem in /dev/vda9 Jan 27 05:38:29.055194 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 27 05:38:29.055817 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 27 05:38:29.073153 systemd-networkd[1489]: eth0: Gained IPv6LL Jan 27 05:38:29.079565 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 27 05:38:29.081311 systemd[1]: Reached target network-online.target - Network is Online. Jan 27 05:38:29.084315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:38:29.088190 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 27 05:38:29.132920 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 27 05:38:29.182159 tar[1666]: linux-amd64/README.md Jan 27 05:38:29.197819 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 27 05:38:29.273254 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:29.839097 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:30.285893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:38:30.296374 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:38:31.028742 kubelet[1783]: E0127 05:38:31.028654 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:38:31.031055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:38:31.031192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:38:31.031751 systemd[1]: kubelet.service: Consumed 1.013s CPU time, 266.2M memory peak. 
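The kubelet exit above is the expected failure mode of a freshly provisioned node: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, so until that happens the unit fails and systemd re-launches it (the later "Scheduled restart job" entries). Below is a short sketch, under that assumption, for telling this benign case apart from a real fault; the path is taken verbatim from the error message.

# Sketch: distinguish the benign "not yet joined" kubelet failure seen above
# from a genuinely broken node. The config path comes from the error message;
# the interpretation (kubeadm writes it at join time) is an assumption.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.is_file():
    print(f"{KUBELET_CONFIG} exists ({KUBELET_CONFIG.stat().st_size} bytes); "
          "a kubelet crash now points at a real configuration problem.")
else:
    print(f"{KUBELET_CONFIG} missing; kubelet restarts are expected until "
          "'kubeadm init' or 'kubeadm join' has run on this node.")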
Jan 27 05:38:31.288094 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:31.851094 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:35.298097 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:35.310555 coreos-metadata[1643]: Jan 27 05:38:35.310 WARN failed to locate config-drive, using the metadata service API instead Jan 27 05:38:35.344810 coreos-metadata[1643]: Jan 27 05:38:35.344 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 27 05:38:35.873234 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:38:35.883519 coreos-metadata[1731]: Jan 27 05:38:35.883 WARN failed to locate config-drive, using the metadata service API instead Jan 27 05:38:35.895942 coreos-metadata[1731]: Jan 27 05:38:35.895 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 27 05:38:37.096222 coreos-metadata[1731]: Jan 27 05:38:37.096 INFO Fetch successful Jan 27 05:38:37.096222 coreos-metadata[1731]: Jan 27 05:38:37.096 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 27 05:38:37.097998 coreos-metadata[1643]: Jan 27 05:38:37.097 INFO Fetch successful Jan 27 05:38:37.097998 coreos-metadata[1643]: Jan 27 05:38:37.097 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 27 05:38:38.780926 coreos-metadata[1731]: Jan 27 05:38:38.780 INFO Fetch successful Jan 27 05:38:38.782715 coreos-metadata[1643]: Jan 27 05:38:38.782 INFO Fetch successful Jan 27 05:38:38.782715 coreos-metadata[1643]: Jan 27 05:38:38.782 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 27 05:38:38.783952 unknown[1731]: wrote ssh authorized keys file for user: core Jan 27 05:38:38.820062 update-ssh-keys[1801]: Updated "/home/core/.ssh/authorized_keys" Jan 27 05:38:38.822100 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 27 05:38:38.824304 systemd[1]: Finished sshkeys.service. Jan 27 05:38:39.383999 coreos-metadata[1643]: Jan 27 05:38:39.383 INFO Fetch successful Jan 27 05:38:39.383999 coreos-metadata[1643]: Jan 27 05:38:39.383 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 27 05:38:40.004662 coreos-metadata[1643]: Jan 27 05:38:40.004 INFO Fetch successful Jan 27 05:38:40.004662 coreos-metadata[1643]: Jan 27 05:38:40.004 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 27 05:38:40.672094 coreos-metadata[1643]: Jan 27 05:38:40.672 INFO Fetch successful Jan 27 05:38:40.672094 coreos-metadata[1643]: Jan 27 05:38:40.672 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 27 05:38:41.221492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 27 05:38:41.223054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:38:41.353453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:38:41.364395 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:38:41.371092 coreos-metadata[1643]: Jan 27 05:38:41.371 INFO Fetch successful Jan 27 05:38:41.407928 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 27 05:38:41.408529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
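The coreos-metadata warnings above show the intended fallback path: no config-drive is attached (hence the repeated "Can't lookup blockdev" for /dev/disk/by-label/config-2), so the agent queries the link-local metadata service instead, using exactly the URLs in the log. Here is a sketch that fetches the same endpoints with only the standard library; it is only meaningful on the instance itself, where 169.254.169.254 is reachable.

# Sketch: query the same OpenStack/EC2-style metadata endpoints that
# coreos-metadata fetches above. Run it on the instance; elsewhere the
# link-local address is unreachable and every request fails.
import json
import urllib.request

BASE = "http://169.254.169.254"
ENDPOINTS = {
    "openstack meta_data": f"{BASE}/openstack/2012-08-10/meta_data.json",
    "hostname": f"{BASE}/latest/meta-data/hostname",
    "public ssh key": f"{BASE}/latest/meta-data/public-keys/0/openssh-key",
}

for name, url in ENDPOINTS.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode()
    except OSError as exc:            # connection refused / timeout off-instance
        print(f"{name}: unreachable ({exc})")
        continue
    if url.endswith(".json"):
        body = json.dumps(json.loads(body), indent=2)
    print(f"{name}:\n{body}\n")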
Jan 27 05:38:41.408662 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 27 05:38:41.411104 systemd[1]: Startup finished in 3.530s (kernel) + 14.914s (initrd) + 16.836s (userspace) = 35.281s. Jan 27 05:38:41.860102 kubelet[1812]: E0127 05:38:41.860028 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:38:41.863984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:38:41.864182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:38:41.864592 systemd[1]: kubelet.service: Consumed 167ms CPU time, 110.2M memory peak. Jan 27 05:38:51.971400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 27 05:38:51.973307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:38:52.092593 chronyd[1641]: Selected source PHC0 Jan 27 05:38:52.295405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:38:52.308529 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:38:52.453071 kubelet[1832]: E0127 05:38:52.453011 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:38:52.456712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:38:52.456932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:38:52.457906 systemd[1]: kubelet.service: Consumed 184ms CPU time, 110.3M memory peak. Jan 27 05:38:53.411426 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 27 05:38:53.413887 systemd[1]: Started sshd@0-10.0.7.41:22-4.153.228.146:55986.service - OpenSSH per-connection server daemon (4.153.228.146:55986). Jan 27 05:38:54.072915 sshd[1840]: Accepted publickey for core from 4.153.228.146 port 55986 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:38:54.080451 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:38:54.095441 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 27 05:38:54.097835 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 27 05:38:54.102807 systemd-logind[1655]: New session 1 of user core. Jan 27 05:38:54.124258 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 27 05:38:54.127207 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 27 05:38:54.143565 (systemd)[1846]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:38:54.146639 systemd-logind[1655]: New session 2 of user core. Jan 27 05:38:54.296585 systemd[1846]: Queued start job for default target default.target. Jan 27 05:38:54.303264 systemd[1846]: Created slice app.slice - User Application Slice. 
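The "Startup finished" summary above splits boot time into kernel, initrd and userspace phases; the three phases add up to the printed total apart from millisecond rounding (35.280 s summed vs. 35.281 s printed here). Below is a tiny parser for lines of this shape, handy when comparing boots; the regex targets the format shown in this log and is an assumption for other systemd versions, which may add phases such as firmware or loader.

# Sketch: parse systemd's "Startup finished in ..." summary in the form that
# appears above. Other systemd versions may report additional phases.
import re

LINE = ("Startup finished in 3.530s (kernel) + 14.914s (initrd) + "
        "16.836s (userspace) = 35.281s.")

phases = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", LINE)}
total = float(re.search(r"= ([\d.]+)s", LINE).group(1))

print(phases)                       # {'kernel': 3.53, 'initrd': 14.914, 'userspace': 16.836}
print(sum(phases.values()), total)  # ~35.28 vs. 35.281 -> per-phase values are rounded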
Jan 27 05:38:54.303302 systemd[1846]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 27 05:38:54.303317 systemd[1846]: Reached target paths.target - Paths. Jan 27 05:38:54.303375 systemd[1846]: Reached target timers.target - Timers. Jan 27 05:38:54.304657 systemd[1846]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 27 05:38:54.305342 systemd[1846]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 27 05:38:54.318841 systemd[1846]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 27 05:38:54.318944 systemd[1846]: Reached target sockets.target - Sockets. Jan 27 05:38:54.321374 systemd[1846]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 27 05:38:54.321501 systemd[1846]: Reached target basic.target - Basic System. Jan 27 05:38:54.321563 systemd[1846]: Reached target default.target - Main User Target. Jan 27 05:38:54.321595 systemd[1846]: Startup finished in 169ms. Jan 27 05:38:54.321849 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 27 05:38:54.330727 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 27 05:38:54.676381 systemd[1]: Started sshd@1-10.0.7.41:22-4.153.228.146:34076.service - OpenSSH per-connection server daemon (4.153.228.146:34076). Jan 27 05:38:55.263157 sshd[1860]: Accepted publickey for core from 4.153.228.146 port 34076 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:38:55.265206 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:38:55.273195 systemd-logind[1655]: New session 3 of user core. Jan 27 05:38:55.279373 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 27 05:38:55.591929 sshd[1864]: Connection closed by 4.153.228.146 port 34076 Jan 27 05:38:55.593806 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Jan 27 05:38:55.605229 systemd[1]: sshd@1-10.0.7.41:22-4.153.228.146:34076.service: Deactivated successfully. Jan 27 05:38:55.610595 systemd[1]: session-3.scope: Deactivated successfully. Jan 27 05:38:55.613397 systemd-logind[1655]: Session 3 logged out. Waiting for processes to exit. Jan 27 05:38:55.618908 systemd-logind[1655]: Removed session 3. Jan 27 05:38:55.715536 systemd[1]: Started sshd@2-10.0.7.41:22-4.153.228.146:34078.service - OpenSSH per-connection server daemon (4.153.228.146:34078). Jan 27 05:38:56.315975 sshd[1870]: Accepted publickey for core from 4.153.228.146 port 34078 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:38:56.316984 sshd-session[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:38:56.326471 systemd-logind[1655]: New session 4 of user core. Jan 27 05:38:56.345362 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 27 05:38:56.640113 sshd[1874]: Connection closed by 4.153.228.146 port 34078 Jan 27 05:38:56.640211 sshd-session[1870]: pam_unix(sshd:session): session closed for user core Jan 27 05:38:56.647964 systemd[1]: sshd@2-10.0.7.41:22-4.153.228.146:34078.service: Deactivated successfully. Jan 27 05:38:56.650359 systemd[1]: session-4.scope: Deactivated successfully. Jan 27 05:38:56.652258 systemd-logind[1655]: Session 4 logged out. Waiting for processes to exit. Jan 27 05:38:56.653581 systemd-logind[1655]: Removed session 4. 
Jan 27 05:38:56.774415 systemd[1]: Started sshd@3-10.0.7.41:22-4.153.228.146:34084.service - OpenSSH per-connection server daemon (4.153.228.146:34084). Jan 27 05:38:57.375875 sshd[1880]: Accepted publickey for core from 4.153.228.146 port 34084 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:38:57.376479 sshd-session[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:38:57.381125 systemd-logind[1655]: New session 5 of user core. Jan 27 05:38:57.388401 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 27 05:38:57.688409 sshd[1884]: Connection closed by 4.153.228.146 port 34084 Jan 27 05:38:57.690480 sshd-session[1880]: pam_unix(sshd:session): session closed for user core Jan 27 05:38:57.700770 systemd[1]: sshd@3-10.0.7.41:22-4.153.228.146:34084.service: Deactivated successfully. Jan 27 05:38:57.705501 systemd[1]: session-5.scope: Deactivated successfully. Jan 27 05:38:57.708700 systemd-logind[1655]: Session 5 logged out. Waiting for processes to exit. Jan 27 05:38:57.711877 systemd-logind[1655]: Removed session 5. Jan 27 05:38:57.798244 systemd[1]: Started sshd@4-10.0.7.41:22-4.153.228.146:34096.service - OpenSSH per-connection server daemon (4.153.228.146:34096). Jan 27 05:38:58.318676 sshd[1890]: Accepted publickey for core from 4.153.228.146 port 34096 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:38:58.320187 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:38:58.325577 systemd-logind[1655]: New session 6 of user core. Jan 27 05:38:58.332284 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 27 05:38:58.547191 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 27 05:38:58.547564 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:38:58.560236 sudo[1895]: pam_unix(sudo:session): session closed for user root Jan 27 05:38:58.655781 sshd[1894]: Connection closed by 4.153.228.146 port 34096 Jan 27 05:38:58.656971 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Jan 27 05:38:58.662471 systemd-logind[1655]: Session 6 logged out. Waiting for processes to exit. Jan 27 05:38:58.662604 systemd[1]: sshd@4-10.0.7.41:22-4.153.228.146:34096.service: Deactivated successfully. Jan 27 05:38:58.664644 systemd[1]: session-6.scope: Deactivated successfully. Jan 27 05:38:58.666976 systemd-logind[1655]: Removed session 6. Jan 27 05:38:58.770948 systemd[1]: Started sshd@5-10.0.7.41:22-4.153.228.146:34104.service - OpenSSH per-connection server daemon (4.153.228.146:34104). Jan 27 05:38:59.321818 sshd[1902]: Accepted publickey for core from 4.153.228.146 port 34104 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:38:59.324093 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:38:59.328954 systemd-logind[1655]: New session 7 of user core. Jan 27 05:38:59.337500 systemd[1]: Started session-7.scope - Session 7 of User core. 
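The stretch above is a typical provisioning pattern: short SSH sessions from 4.153.228.146 log in as core, run a sudo command or two (setenforce above, audit-rule cleanup below) and disconnect within a second or so. When reviewing such a burst it helps to pair each "Accepted publickey" line with its "Connection closed" line by source port; the sketch below does that pairing over a saved copy of the log (journal.txt is a placeholder name).

# Sketch: pair sshd "Accepted publickey" / "Connection closed" events by source
# address and port to see how long each provisioning session stayed open.
import re

with open("journal.txt", encoding="utf-8") as f:
    text = f.read()

OPEN = re.compile(r"(\d\d:\d\d:\d\d)\.\d+ sshd\[\d+\]: Accepted publickey for (\w+) "
                  r"from ([\d.]+) port (\d+)")
CLOSE = re.compile(r"(\d\d:\d\d:\d\d)\.\d+ sshd\[\d+\]: Connection closed by ([\d.]+) port (\d+)")

opened = {(addr, port): (t, user) for t, user, addr, port in OPEN.findall(text)}
for t_close, addr, port in CLOSE.findall(text):
    t_open, user = opened.get((addr, port), ("?", "?"))
    print(f"{addr}:{port} user={user} opened {t_open} closed {t_close}")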
Jan 27 05:38:59.536991 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 27 05:38:59.537468 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:38:59.542430 sudo[1908]: pam_unix(sudo:session): session closed for user root Jan 27 05:38:59.552004 sudo[1907]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 27 05:38:59.552537 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:38:59.563652 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 05:38:59.608000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 05:38:59.610380 kernel: kauditd_printk_skb: 77 callbacks suppressed Jan 27 05:38:59.610457 kernel: audit: type=1305 audit(1769492339.608:236): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 05:38:59.608000 audit[1932]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe80d4be40 a2=420 a3=0 items=0 ppid=1913 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:38:59.612592 augenrules[1932]: No rules Jan 27 05:38:59.617004 kernel: audit: type=1300 audit(1769492339.608:236): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe80d4be40 a2=420 a3=0 items=0 ppid=1913 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:38:59.617868 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 05:38:59.620076 kernel: audit: type=1327 audit(1769492339.608:236): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 05:38:59.608000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 05:38:59.618543 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 27 05:38:59.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.622256 sudo[1907]: pam_unix(sudo:session): session closed for user root Jan 27 05:38:59.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.626735 kernel: audit: type=1130 audit(1769492339.617:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.626773 kernel: audit: type=1131 audit(1769492339.617:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:38:59.621000 audit[1907]: USER_END pid=1907 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.630264 kernel: audit: type=1106 audit(1769492339.621:239): pid=1907 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.631066 kernel: audit: type=1104 audit(1769492339.621:240): pid=1907 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.621000 audit[1907]: CRED_DISP pid=1907 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.730123 sshd[1906]: Connection closed by 4.153.228.146 port 34104 Jan 27 05:38:59.731580 sshd-session[1902]: pam_unix(sshd:session): session closed for user core Jan 27 05:38:59.734000 audit[1902]: USER_END pid=1902 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:38:59.747094 kernel: audit: type=1106 audit(1769492339.734:241): pid=1902 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:38:59.734000 audit[1902]: CRED_DISP pid=1902 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:38:59.748149 systemd[1]: sshd@5-10.0.7.41:22-4.153.228.146:34104.service: Deactivated successfully. Jan 27 05:38:59.751129 kernel: audit: type=1104 audit(1769492339.734:242): pid=1902 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:38:59.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.7.41:22-4.153.228.146:34104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.753005 systemd[1]: session-7.scope: Deactivated successfully. Jan 27 05:38:59.755058 kernel: audit: type=1131 audit(1769492339.747:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.7.41:22-4.153.228.146:34104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.755378 systemd-logind[1655]: Session 7 logged out. Waiting for processes to exit. Jan 27 05:38:59.760128 systemd-logind[1655]: Removed session 7. 
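From here to the end of the capture most records are raw audit events, and their PROCTITLE fields encode the executed command as hex with NUL-separated arguments. Decoding them shows that the burst which follows is dockerd (ppid 1964) creating its standard DOCKER, DOCKER-FORWARD, DOCKER-CT, DOCKER-ISOLATION-STAGE-* and DOCKER-USER chains in both iptables and ip6tables. A small decoder for those fields, run against a saved copy of the log (journal.txt is a placeholder name):

# Sketch: decode audit PROCTITLE fields (hex-encoded, NUL-separated argv) into
# readable command lines.
import re

def decode_proctitle(hexstr: str) -> str:
    """Turn '2F7573722F62696E2F...' into '/usr/bin/iptables --wait ...'."""
    return bytes.fromhex(hexstr).decode("utf-8", "replace").replace("\x00", " ")

with open("journal.txt", encoding="utf-8") as f:
    text = f.read()

for hexstr in re.findall(r"proctitle=([0-9A-Fa-f]{4,})", text):
    print(decode_proctitle(hexstr))

# For example, the first NETFILTER_CFG proctitle below decodes to:
#   /usr/bin/iptables --wait -t nat -N DOCKER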
Jan 27 05:38:59.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.7.41:22-4.153.228.146:34118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:38:59.845462 systemd[1]: Started sshd@6-10.0.7.41:22-4.153.228.146:34118.service - OpenSSH per-connection server daemon (4.153.228.146:34118). Jan 27 05:39:00.392000 audit[1941]: USER_ACCT pid=1941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:00.394356 sshd[1941]: Accepted publickey for core from 4.153.228.146 port 34118 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:00.395000 audit[1941]: CRED_ACQ pid=1941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:00.396000 audit[1941]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbd34d480 a2=3 a3=0 items=0 ppid=1 pid=1941 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:00.396000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:39:00.399252 sshd-session[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:00.412077 systemd-logind[1655]: New session 8 of user core. Jan 27 05:39:00.424457 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 27 05:39:00.428000 audit[1941]: USER_START pid=1941 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:00.430000 audit[1945]: CRED_ACQ pid=1945 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:00.595000 audit[1946]: USER_ACCT pid=1946 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:00.596583 sudo[1946]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 27 05:39:00.595000 audit[1946]: CRED_REFR pid=1946 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:00.597083 sudo[1946]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:39:00.596000 audit[1946]: USER_START pid=1946 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:01.061489 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 27 05:39:01.071474 (dockerd)[1964]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 27 05:39:01.399399 dockerd[1964]: time="2026-01-27T05:39:01.399028277Z" level=info msg="Starting up" Jan 27 05:39:01.403029 dockerd[1964]: time="2026-01-27T05:39:01.402998368Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 27 05:39:01.418774 dockerd[1964]: time="2026-01-27T05:39:01.418727125Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 27 05:39:01.437575 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4122731483-merged.mount: Deactivated successfully. Jan 27 05:39:01.467484 dockerd[1964]: time="2026-01-27T05:39:01.467436204Z" level=info msg="Loading containers: start." Jan 27 05:39:01.479063 kernel: Initializing XFRM netlink socket Jan 27 05:39:01.551000 audit[2012]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.551000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff41060ee0 a2=0 a3=0 items=0 ppid=1964 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.551000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 05:39:01.554000 audit[2014]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.554000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd82b240f0 a2=0 a3=0 items=0 ppid=1964 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.554000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 05:39:01.555000 audit[2016]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.555000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd95b34f20 a2=0 a3=0 items=0 ppid=1964 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.555000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 05:39:01.557000 audit[2018]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.557000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff057bd470 a2=0 a3=0 items=0 ppid=1964 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:39:01.557000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 05:39:01.559000 audit[2020]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.559000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd73ab3be0 a2=0 a3=0 items=0 ppid=1964 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 05:39:01.561000 audit[2022]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.561000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4a871760 a2=0 a3=0 items=0 ppid=1964 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:39:01.563000 audit[2024]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.563000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd06d36bf0 a2=0 a3=0 items=0 ppid=1964 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.563000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 05:39:01.565000 audit[2026]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.565000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcc71bd110 a2=0 a3=0 items=0 ppid=1964 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 05:39:01.603000 audit[2029]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.603000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd6ab8d340 a2=0 a3=0 items=0 ppid=1964 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.603000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 27 05:39:01.604000 audit[2031]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.604000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcb5ce6590 a2=0 a3=0 items=0 ppid=1964 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.604000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 05:39:01.606000 audit[2033]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.606000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffd263a000 a2=0 a3=0 items=0 ppid=1964 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 05:39:01.608000 audit[2035]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.608000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff29ebb720 a2=0 a3=0 items=0 ppid=1964 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.608000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:39:01.610000 audit[2037]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.610000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffff0870780 a2=0 a3=0 items=0 ppid=1964 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 05:39:01.647000 audit[2067]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.647000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe57f5b5f0 a2=0 a3=0 items=0 ppid=1964 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.647000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 05:39:01.648000 audit[2069]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.648000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe6a684320 a2=0 a3=0 items=0 ppid=1964 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.648000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 05:39:01.650000 audit[2071]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.650000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd5ad1070 a2=0 a3=0 items=0 ppid=1964 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.650000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 05:39:01.652000 audit[2073]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.652000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2b123520 a2=0 a3=0 items=0 ppid=1964 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.652000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 05:39:01.654000 audit[2075]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.654000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd84820690 a2=0 a3=0 items=0 ppid=1964 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.654000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 05:39:01.655000 audit[2077]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.655000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd9b5b7030 a2=0 a3=0 items=0 ppid=1964 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.655000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:39:01.657000 audit[2079]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2079 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.657000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde1a66000 a2=0 a3=0 items=0 ppid=1964 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 05:39:01.659000 audit[2081]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.659000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe123f3280 a2=0 a3=0 items=0 ppid=1964 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.659000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 05:39:01.661000 audit[2083]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.661000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd2e61a730 a2=0 a3=0 items=0 ppid=1964 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 27 05:39:01.663000 audit[2085]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.663000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd2675bb40 a2=0 a3=0 items=0 ppid=1964 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 05:39:01.665000 audit[2087]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.665000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe25f00c70 a2=0 a3=0 items=0 ppid=1964 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.665000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 05:39:01.667000 audit[2089]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 27 05:39:01.667000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd3c3e43b0 a2=0 a3=0 items=0 ppid=1964 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:39:01.668000 audit[2091]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.668000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffd0dadaf0 a2=0 a3=0 items=0 ppid=1964 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 05:39:01.673000 audit[2096]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.673000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca490a1c0 a2=0 a3=0 items=0 ppid=1964 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 05:39:01.675000 audit[2098]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.675000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe9d1a0580 a2=0 a3=0 items=0 ppid=1964 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 05:39:01.677000 audit[2100]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.677000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd441afed0 a2=0 a3=0 items=0 ppid=1964 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.677000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 05:39:01.679000 audit[2102]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.679000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda2550db0 a2=0 a3=0 items=0 ppid=1964 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 05:39:01.681000 audit[2104]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.681000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdd36de3f0 a2=0 a3=0 items=0 ppid=1964 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 05:39:01.683000 audit[2106]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:01.683000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd9da49f0 a2=0 a3=0 items=0 ppid=1964 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.683000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 05:39:01.711000 audit[2111]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.711000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff5aed5a00 a2=0 a3=0 items=0 ppid=1964 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 27 05:39:01.714000 audit[2113]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.714000 audit[2113]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffb0d9e5b0 a2=0 a3=0 items=0 ppid=1964 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 27 05:39:01.722000 audit[2121]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.722000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffde5df8cc0 a2=0 a3=0 items=0 ppid=1964 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
27 05:39:01.722000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 27 05:39:01.734000 audit[2127]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.734000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff8a79bbc0 a2=0 a3=0 items=0 ppid=1964 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 27 05:39:01.736000 audit[2129]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.736000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe2df139f0 a2=0 a3=0 items=0 ppid=1964 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 27 05:39:01.738000 audit[2131]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.738000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdfe599110 a2=0 a3=0 items=0 ppid=1964 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 27 05:39:01.740000 audit[2133]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.740000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffb06a7190 a2=0 a3=0 items=0 ppid=1964 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.740000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 05:39:01.742000 audit[2135]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:01.742000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe9845f660 a2=0 a3=0 items=0 ppid=1964 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:01.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 27 05:39:01.743700 systemd-networkd[1489]: docker0: Link UP Jan 27 05:39:01.751186 dockerd[1964]: time="2026-01-27T05:39:01.751137571Z" level=info msg="Loading containers: done." Jan 27 05:39:01.762732 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3642951789-merged.mount: Deactivated successfully. Jan 27 05:39:01.774989 dockerd[1964]: time="2026-01-27T05:39:01.774944498Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 27 05:39:01.775130 dockerd[1964]: time="2026-01-27T05:39:01.775053318Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 27 05:39:01.775130 dockerd[1964]: time="2026-01-27T05:39:01.775121268Z" level=info msg="Initializing buildkit" Jan 27 05:39:01.800612 dockerd[1964]: time="2026-01-27T05:39:01.800569872Z" level=info msg="Completed buildkit initialization" Jan 27 05:39:01.808871 dockerd[1964]: time="2026-01-27T05:39:01.808827916Z" level=info msg="Daemon has completed initialization" Jan 27 05:39:01.809090 dockerd[1964]: time="2026-01-27T05:39:01.808977472Z" level=info msg="API listen on /run/docker.sock" Jan 27 05:39:01.809342 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 27 05:39:01.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:02.471384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 27 05:39:02.473132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:39:02.616582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:02.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:02.630562 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:39:02.665767 kubelet[2182]: E0127 05:39:02.665685 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:39:02.667831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:39:02.667959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:39:02.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 27 05:39:02.668489 systemd[1]: kubelet.service: Consumed 143ms CPU time, 110M memory peak. Jan 27 05:39:03.127602 containerd[1681]: time="2026-01-27T05:39:03.127557859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 27 05:39:03.827969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3443866474.mount: Deactivated successfully. Jan 27 05:39:04.781060 containerd[1681]: time="2026-01-27T05:39:04.780781179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:04.783242 containerd[1681]: time="2026-01-27T05:39:04.783161637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 27 05:39:04.784739 containerd[1681]: time="2026-01-27T05:39:04.784457324Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:04.787135 containerd[1681]: time="2026-01-27T05:39:04.787104170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:04.787958 containerd[1681]: time="2026-01-27T05:39:04.787936555Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.660336426s" Jan 27 05:39:04.788046 containerd[1681]: time="2026-01-27T05:39:04.788023411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 27 05:39:04.788871 containerd[1681]: time="2026-01-27T05:39:04.788856130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 27 05:39:06.274191 containerd[1681]: time="2026-01-27T05:39:06.274121384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:06.276348 containerd[1681]: time="2026-01-27T05:39:06.276140843Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 27 05:39:06.277706 containerd[1681]: time="2026-01-27T05:39:06.277685497Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:06.280799 containerd[1681]: time="2026-01-27T05:39:06.280762282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:06.281528 containerd[1681]: time="2026-01-27T05:39:06.281507360Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.49257158s" Jan 27 05:39:06.281570 containerd[1681]: time="2026-01-27T05:39:06.281533313Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 27 05:39:06.282195 containerd[1681]: time="2026-01-27T05:39:06.282172147Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 27 05:39:07.631059 containerd[1681]: time="2026-01-27T05:39:07.630487841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:07.633270 containerd[1681]: time="2026-01-27T05:39:07.633248951Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=0" Jan 27 05:39:07.634682 containerd[1681]: time="2026-01-27T05:39:07.634663811Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:07.637384 containerd[1681]: time="2026-01-27T05:39:07.637365463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:07.638680 containerd[1681]: time="2026-01-27T05:39:07.638662471Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.356468794s" Jan 27 05:39:07.638791 containerd[1681]: time="2026-01-27T05:39:07.638753749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 27 05:39:07.639647 containerd[1681]: time="2026-01-27T05:39:07.639499927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 27 05:39:08.629637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992981272.mount: Deactivated successfully. 
Jan 27 05:39:09.679888 containerd[1681]: time="2026-01-27T05:39:09.679819085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:09.681761 containerd[1681]: time="2026-01-27T05:39:09.681691904Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 27 05:39:09.683725 containerd[1681]: time="2026-01-27T05:39:09.683694012Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:09.685066 containerd[1681]: time="2026-01-27T05:39:09.685018480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:09.685501 containerd[1681]: time="2026-01-27T05:39:09.685432993Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.045911241s" Jan 27 05:39:09.685501 containerd[1681]: time="2026-01-27T05:39:09.685461366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 27 05:39:09.685869 containerd[1681]: time="2026-01-27T05:39:09.685854361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 27 05:39:10.259847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329035002.mount: Deactivated successfully. 
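Each "Pulled image" message above follows the same shape: repo tag, image id, repo digest, a quoted byte count, and a Go-style duration ("2.045911241s" here, millisecond values later in the log). A minimal parsing sketch, tuned only to the format as it appears in this journal (the regex and helper are illustrative, not a containerd API):

    import re

    def parse_pulled(msg: str):
        """Extract (image ref, size in bytes, seconds) from a containerd 'Pulled image' message."""
        clean = msg.replace('\\"', '"')  # undo the journal's backslash-escaped quotes
        m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)(ms|s)', clean)
        if not m:
            return None
        ref, size, value, unit = m.groups()
        seconds = float(value) / 1000 if unit == "ms" else float(value)
        return ref, int(size), seconds

    # Abridged copy of the kube-proxy entry above (digests shortened to "...").
    line = ('Pulled image \\"registry.k8s.io/kube-proxy:v1.32.11\\" with image id \\"sha256:4d8f...\\", '
            'repo tag \\"registry.k8s.io/kube-proxy:v1.32.11\\", repo digest \\"registry.k8s.io/kube-proxy@sha256:4204...\\", '
            'size \\"31160918\\" in 2.045911241s')
    print(parse_pulled(line))   # ('registry.k8s.io/kube-proxy:v1.32.11', 31160918, 2.045911241)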
Jan 27 05:39:10.939196 containerd[1681]: time="2026-01-27T05:39:10.939123787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:10.941076 containerd[1681]: time="2026-01-27T05:39:10.940866901Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 27 05:39:10.942333 containerd[1681]: time="2026-01-27T05:39:10.942311292Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:10.946061 containerd[1681]: time="2026-01-27T05:39:10.946016879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:10.946728 containerd[1681]: time="2026-01-27T05:39:10.946702322Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.26057283s" Jan 27 05:39:10.946771 containerd[1681]: time="2026-01-27T05:39:10.946734742Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 27 05:39:10.947566 containerd[1681]: time="2026-01-27T05:39:10.947549870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 27 05:39:11.566061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248273398.mount: Deactivated successfully. 
Jan 27 05:39:11.578073 containerd[1681]: time="2026-01-27T05:39:11.577368984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:39:11.578839 containerd[1681]: time="2026-01-27T05:39:11.578681685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 05:39:11.580150 containerd[1681]: time="2026-01-27T05:39:11.580119318Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:39:11.582724 containerd[1681]: time="2026-01-27T05:39:11.582690957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:39:11.583407 containerd[1681]: time="2026-01-27T05:39:11.583277442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 635.702585ms" Jan 27 05:39:11.583407 containerd[1681]: time="2026-01-27T05:39:11.583311843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 27 05:39:11.584449 containerd[1681]: time="2026-01-27T05:39:11.584411865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 27 05:39:12.150061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400413055.mount: Deactivated successfully. Jan 27 05:39:12.722315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 27 05:39:12.725394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:39:14.082320 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 27 05:39:14.082441 kernel: audit: type=1130 audit(1769492354.077:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:14.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:14.078310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:14.089628 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:39:14.142805 update_engine[1659]: I20260127 05:39:14.142086 1659 update_attempter.cc:509] Updating boot flags... 
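From this point on, kauditd rate-limiting ("callbacks suppressed" above) means many audit events appear twice: once as a pre-parsed "audit[pid]: NAME ..." line and once as a raw kernel "audit: type=NNNN ..." line. The numeric types occurring in this log can be mapped back to names with a small lookup; the pairings below can be read off the log itself (the type=1130 kernel record above carries the same unit=kubelet message as the SERVICE_START line next to it) and match the standard Linux AUDIT_* constants.

    import re

    # Record types seen in this log, keyed by the numeric "type=" field of the raw lines.
    AUDIT_TYPES = {
        1130: "SERVICE_START",   # systemd unit started
        1131: "SERVICE_STOP",    # systemd unit stopped (res=failed for the crash-looping kubelet)
        1300: "SYSCALL",         # e.g. the xtables-nft-multi netlink calls
        1325: "NETFILTER_CFG",   # nft_register_chain / nft_register_rule changes
        1327: "PROCTITLE",       # hex-encoded argv of the triggering process
        1334: "BPF",             # BPF program LOAD/UNLOAD events
    }

    def label(kernel_line: str) -> str:
        """Prefix a raw 'audit: type=NNNN ...' kernel line with its record name."""
        m = re.search(r"type=(\d+)", kernel_line)
        if not m:
            return kernel_line
        return f"{AUDIT_TYPES.get(int(m.group(1)), 'UNKNOWN')} -> {kernel_line}"

    print(label("audit: type=1130 audit(1769492354.077:296): pid=1 uid=0 ..."))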
Jan 27 05:39:14.341268 kubelet[2377]: E0127 05:39:14.340483 2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:39:14.343244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:39:14.343374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:39:14.343753 systemd[1]: kubelet.service: Consumed 310ms CPU time, 108.6M memory peak. Jan 27 05:39:14.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 05:39:14.347107 kernel: audit: type=1131 audit(1769492354.342:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 05:39:15.407060 containerd[1681]: time="2026-01-27T05:39:15.405802357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:15.411570 containerd[1681]: time="2026-01-27T05:39:15.411538061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 27 05:39:15.413370 containerd[1681]: time="2026-01-27T05:39:15.413343667Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:15.421531 containerd[1681]: time="2026-01-27T05:39:15.421184740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:15.423670 containerd[1681]: time="2026-01-27T05:39:15.423639419Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.839127169s" Jan 27 05:39:15.423905 containerd[1681]: time="2026-01-27T05:39:15.423671704Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 27 05:39:18.244027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:18.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:18.244658 systemd[1]: kubelet.service: Consumed 310ms CPU time, 108.6M memory peak. Jan 27 05:39:18.249996 kernel: audit: type=1130 audit(1769492358.243:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:18.250084 kernel: audit: type=1131 audit(1769492358.243:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:18.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:18.249289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:39:18.278778 systemd[1]: Reload requested from client PID 2431 ('systemctl') (unit session-8.scope)... Jan 27 05:39:18.278908 systemd[1]: Reloading... Jan 27 05:39:18.369060 zram_generator::config[2474]: No configuration found. Jan 27 05:39:18.566175 systemd[1]: Reloading finished in 286 ms. Jan 27 05:39:18.592065 kernel: audit: type=1334 audit(1769492358.588:300): prog-id=66 op=LOAD Jan 27 05:39:18.588000 audit: BPF prog-id=66 op=LOAD Jan 27 05:39:18.588000 audit: BPF prog-id=58 op=UNLOAD Jan 27 05:39:18.598114 kernel: audit: type=1334 audit(1769492358.588:301): prog-id=58 op=UNLOAD Jan 27 05:39:18.589000 audit: BPF prog-id=67 op=LOAD Jan 27 05:39:18.589000 audit: BPF prog-id=68 op=LOAD Jan 27 05:39:18.601113 kernel: audit: type=1334 audit(1769492358.589:302): prog-id=67 op=LOAD Jan 27 05:39:18.601201 kernel: audit: type=1334 audit(1769492358.589:303): prog-id=68 op=LOAD Jan 27 05:39:18.589000 audit: BPF prog-id=50 op=UNLOAD Jan 27 05:39:18.602277 kernel: audit: type=1334 audit(1769492358.589:304): prog-id=50 op=UNLOAD Jan 27 05:39:18.589000 audit: BPF prog-id=51 op=UNLOAD Jan 27 05:39:18.603329 kernel: audit: type=1334 audit(1769492358.589:305): prog-id=51 op=UNLOAD Jan 27 05:39:18.590000 audit: BPF prog-id=69 op=LOAD Jan 27 05:39:18.590000 audit: BPF prog-id=49 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=70 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=63 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=71 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=72 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=64 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=65 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=73 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=52 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=74 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=75 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=53 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=54 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=76 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=59 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=77 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=78 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=60 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=61 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=79 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=62 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=80 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=46 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=81 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=82 op=LOAD Jan 27 05:39:18.591000 audit: BPF prog-id=47 op=UNLOAD Jan 27 05:39:18.591000 audit: BPF prog-id=48 op=UNLOAD Jan 27 05:39:18.597000 audit: BPF prog-id=83 op=LOAD Jan 27 05:39:18.597000 audit: BPF prog-id=55 op=UNLOAD Jan 27 05:39:18.597000 audit: BPF prog-id=84 op=LOAD Jan 27 05:39:18.597000 audit: BPF prog-id=85 op=LOAD Jan 27 05:39:18.597000 audit: 
BPF prog-id=56 op=UNLOAD Jan 27 05:39:18.597000 audit: BPF prog-id=57 op=UNLOAD Jan 27 05:39:18.616439 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 27 05:39:18.616509 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 27 05:39:18.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 05:39:18.617045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:18.617092 systemd[1]: kubelet.service: Consumed 98ms CPU time, 98.4M memory peak. Jan 27 05:39:18.619082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:39:19.734785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:19.739177 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 27 05:39:19.739288 kernel: audit: type=1130 audit(1769492359.733:341): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:19.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:19.766406 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 05:39:19.812883 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 05:39:19.812883 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 05:39:19.812883 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
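The systemd reload logged above ("Reloading..." / "Reloading finished in 286 ms") is immediately followed by a burst of "audit: BPF prog-id=... op=LOAD/UNLOAD" records, consistent with systemd dropping and re-creating its per-unit BPF programs during the reload (an interpretation, not something the log states). One quick sanity check is to tally the burst; the sketch below reads journal text on stdin and counts operations and distinct program ids (the invocation in the comment is illustrative).

    import re
    import sys
    from collections import Counter

    # Tally "prog-id=N op=LOAD|UNLOAD" audit records from journal text on stdin,
    # e.g.:  journalctl -k | python3 tally_bpf.py   (illustrative invocation)
    ops = Counter()
    prog_ids = set()
    for text_line in sys.stdin:
        for prog_id, op in re.findall(r"prog-id=(\d+) op=(LOAD|UNLOAD)", text_line):
            ops[op] += 1
            prog_ids.add(int(prog_id))
    print(dict(ops), f"distinct prog-ids: {len(prog_ids)}")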
Jan 27 05:39:19.812883 kubelet[2531]: I0127 05:39:19.812642 2531 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 05:39:20.024535 kubelet[2531]: I0127 05:39:20.024437 2531 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 27 05:39:20.024535 kubelet[2531]: I0127 05:39:20.024469 2531 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 05:39:20.024916 kubelet[2531]: I0127 05:39:20.024903 2531 server.go:954] "Client rotation is on, will bootstrap in background" Jan 27 05:39:20.059989 kubelet[2531]: E0127 05:39:20.059542 2531 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.7.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.7.41:6443: connect: connection refused" logger="UnhandledError" Jan 27 05:39:20.065612 kubelet[2531]: I0127 05:39:20.065360 2531 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 05:39:20.077488 kubelet[2531]: I0127 05:39:20.077464 2531 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 05:39:20.080825 kubelet[2531]: I0127 05:39:20.080325 2531 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 27 05:39:20.080825 kubelet[2531]: I0127 05:39:20.080565 2531 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 05:39:20.080825 kubelet[2531]: I0127 05:39:20.080586 2531 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4592-0-0-n-eb4c5d05b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 05:39:20.081790 kubelet[2531]: I0127 05:39:20.081777 2531 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 
05:39:20.081848 kubelet[2531]: I0127 05:39:20.081843 2531 container_manager_linux.go:304] "Creating device plugin manager" Jan 27 05:39:20.082009 kubelet[2531]: I0127 05:39:20.082001 2531 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:39:20.087709 kubelet[2531]: I0127 05:39:20.087683 2531 kubelet.go:446] "Attempting to sync node with API server" Jan 27 05:39:20.087838 kubelet[2531]: I0127 05:39:20.087830 2531 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 05:39:20.087893 kubelet[2531]: I0127 05:39:20.087888 2531 kubelet.go:352] "Adding apiserver pod source" Jan 27 05:39:20.087931 kubelet[2531]: I0127 05:39:20.087927 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 05:39:20.093646 kubelet[2531]: I0127 05:39:20.093623 2531 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 05:39:20.094081 kubelet[2531]: I0127 05:39:20.094070 2531 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 05:39:20.094136 kubelet[2531]: W0127 05:39:20.094127 2531 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 27 05:39:20.096435 kubelet[2531]: I0127 05:39:20.096398 2531 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 27 05:39:20.096435 kubelet[2531]: I0127 05:39:20.096438 2531 server.go:1287] "Started kubelet" Jan 27 05:39:20.097240 kubelet[2531]: W0127 05:39:20.096604 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.7.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4592-0-0-n-eb4c5d05b1&limit=500&resourceVersion=0": dial tcp 10.0.7.41:6443: connect: connection refused Jan 27 05:39:20.097240 kubelet[2531]: E0127 05:39:20.096670 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.7.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4592-0-0-n-eb4c5d05b1&limit=500&resourceVersion=0\": dial tcp 10.0.7.41:6443: connect: connection refused" logger="UnhandledError" Jan 27 05:39:20.107007 kubelet[2531]: I0127 05:39:20.106943 2531 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 05:39:20.107347 kubelet[2531]: I0127 05:39:20.107312 2531 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 05:39:20.108777 kubelet[2531]: I0127 05:39:20.108631 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 05:39:20.110000 audit[2542]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.114565 kernel: audit: type=1325 audit(1769492360.110:342): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.114638 kernel: audit: type=1300 audit(1769492360.110:342): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef2a93870 a2=0 a3=0 items=0 ppid=2531 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.110000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 
a1=7ffef2a93870 a2=0 a3=0 items=0 ppid=2531 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.115662 kubelet[2531]: W0127 05:39:20.115028 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.7.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.7.41:6443: connect: connection refused Jan 27 05:39:20.115662 kubelet[2531]: E0127 05:39:20.115096 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.7.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.7.41:6443: connect: connection refused" logger="UnhandledError" Jan 27 05:39:20.117481 kubelet[2531]: E0127 05:39:20.115558 2531 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.7.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.7.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4592-0-0-n-eb4c5d05b1.188e7fed24508221 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4592-0-0-n-eb4c5d05b1,UID:ci-4592-0-0-n-eb4c5d05b1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4592-0-0-n-eb4c5d05b1,},FirstTimestamp:2026-01-27 05:39:20.096416289 +0000 UTC m=+0.326716976,LastTimestamp:2026-01-27 05:39:20.096416289 +0000 UTC m=+0.326716976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4592-0-0-n-eb4c5d05b1,}" Jan 27 05:39:20.118418 kubelet[2531]: E0127 05:39:20.118403 2531 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 05:39:20.119062 kubelet[2531]: I0127 05:39:20.118497 2531 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 05:39:20.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 05:39:20.121196 kubelet[2531]: I0127 05:39:20.121183 2531 server.go:479] "Adding debug handlers to kubelet server" Jan 27 05:39:20.114000 audit[2543]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.122116 kubelet[2531]: I0127 05:39:20.118593 2531 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 05:39:20.122155 kernel: audit: type=1327 audit(1769492360.110:342): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 05:39:20.122192 kernel: audit: type=1325 audit(1769492360.114:343): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.114000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce90ab6d0 a2=0 a3=0 items=0 ppid=2531 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.123549 kubelet[2531]: I0127 05:39:20.123172 2531 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 27 05:39:20.123813 kubelet[2531]: I0127 05:39:20.123182 2531 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 27 05:39:20.123858 kubelet[2531]: E0127 05:39:20.123314 2531 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" Jan 27 05:39:20.123926 kubelet[2531]: I0127 05:39:20.123920 2531 reconciler.go:26] "Reconciler: start to sync state" Jan 27 05:39:20.124771 kernel: audit: type=1300 audit(1769492360.114:343): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce90ab6d0 a2=0 a3=0 items=0 ppid=2531 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.125510 kubelet[2531]: W0127 05:39:20.125471 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.7.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.7.41:6443: connect: connection refused Jan 27 05:39:20.125609 kubelet[2531]: E0127 05:39:20.125599 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.7.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.7.41:6443: connect: connection refused" logger="UnhandledError" Jan 27 05:39:20.125747 kubelet[2531]: E0127 05:39:20.125707 2531 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-eb4c5d05b1?timeout=10s\": dial tcp 10.0.7.41:6443: connect: 
connection refused" interval="200ms" Jan 27 05:39:20.127366 kubelet[2531]: I0127 05:39:20.127354 2531 factory.go:221] Registration of the containerd container factory successfully Jan 27 05:39:20.128060 kubelet[2531]: I0127 05:39:20.127456 2531 factory.go:221] Registration of the systemd container factory successfully Jan 27 05:39:20.128060 kubelet[2531]: I0127 05:39:20.127525 2531 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 05:39:20.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 05:39:20.131045 kernel: audit: type=1327 audit(1769492360.114:343): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 05:39:20.127000 audit[2545]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.127000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff065af570 a2=0 a3=0 items=0 ppid=2531 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.134866 kernel: audit: type=1325 audit(1769492360.127:344): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.134914 kernel: audit: type=1300 audit(1769492360.127:344): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff065af570 a2=0 a3=0 items=0 ppid=2531 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:39:20.140065 kernel: audit: type=1327 audit(1769492360.127:344): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:39:20.131000 audit[2547]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.131000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd903d7300 a2=0 a3=0 items=0 ppid=2531 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:39:20.143000 audit[2551]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.143000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc4e9b9d30 a2=0 a3=0 items=0 ppid=2531 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:39:20.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 27 05:39:20.145367 kubelet[2531]: I0127 05:39:20.145305 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 05:39:20.145000 audit[2552]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:20.145000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe51138990 a2=0 a3=0 items=0 ppid=2531 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 05:39:20.146555 kubelet[2531]: I0127 05:39:20.146537 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 05:39:20.146595 kubelet[2531]: I0127 05:39:20.146560 2531 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 27 05:39:20.146595 kubelet[2531]: I0127 05:39:20.146585 2531 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 27 05:39:20.146595 kubelet[2531]: I0127 05:39:20.146592 2531 kubelet.go:2382] "Starting kubelet main sync loop" Jan 27 05:39:20.146653 kubelet[2531]: E0127 05:39:20.146641 2531 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 05:39:20.146000 audit[2553]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.146000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa887b6f0 a2=0 a3=0 items=0 ppid=2531 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 05:39:20.147000 audit[2554]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.147000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe3c73400 a2=0 a3=0 items=0 ppid=2531 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 05:39:20.148000 audit[2555]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:20.148000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed8900290 a2=0 a3=0 items=0 
ppid=2531 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 05:39:20.149000 audit[2556]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:20.149000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbb660360 a2=0 a3=0 items=0 ppid=2531 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 05:39:20.154000 audit[2561]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:20.154000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa797dbd0 a2=0 a3=0 items=0 ppid=2531 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 05:39:20.156064 kubelet[2531]: W0127 05:39:20.155932 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.7.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.7.41:6443: connect: connection refused Jan 27 05:39:20.156064 kubelet[2531]: E0127 05:39:20.155988 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.7.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.7.41:6443: connect: connection refused" logger="UnhandledError" Jan 27 05:39:20.156000 audit[2562]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:20.156000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd81169f50 a2=0 a3=0 items=0 ppid=2531 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 05:39:20.159450 kubelet[2531]: I0127 05:39:20.159437 2531 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 05:39:20.159605 kubelet[2531]: I0127 05:39:20.159538 2531 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 05:39:20.159605 kubelet[2531]: I0127 05:39:20.159555 2531 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:39:20.162859 kubelet[2531]: I0127 05:39:20.162849 2531 policy_none.go:49] "None policy: 
Start" Jan 27 05:39:20.162936 kubelet[2531]: I0127 05:39:20.162930 2531 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 27 05:39:20.162973 kubelet[2531]: I0127 05:39:20.162969 2531 state_mem.go:35] "Initializing new in-memory state store" Jan 27 05:39:20.168866 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 27 05:39:20.176431 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 27 05:39:20.179395 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 27 05:39:20.190992 kubelet[2531]: I0127 05:39:20.190931 2531 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 05:39:20.191189 kubelet[2531]: I0127 05:39:20.191128 2531 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 05:39:20.191189 kubelet[2531]: I0127 05:39:20.191141 2531 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 05:39:20.192310 kubelet[2531]: I0127 05:39:20.192298 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 05:39:20.193354 kubelet[2531]: E0127 05:39:20.193334 2531 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 27 05:39:20.193402 kubelet[2531]: E0127 05:39:20.193376 2531 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4592-0-0-n-eb4c5d05b1\" not found" Jan 27 05:39:20.259794 systemd[1]: Created slice kubepods-burstable-pod21a9838d1587ce67e1583dca6b6d8e19.slice - libcontainer container kubepods-burstable-pod21a9838d1587ce67e1583dca6b6d8e19.slice. Jan 27 05:39:20.280028 kubelet[2531]: E0127 05:39:20.278975 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.282487 systemd[1]: Created slice kubepods-burstable-pod75bcf10f02a58c1e2b6f37ff11a1481d.slice - libcontainer container kubepods-burstable-pod75bcf10f02a58c1e2b6f37ff11a1481d.slice. Jan 27 05:39:20.286285 kubelet[2531]: E0127 05:39:20.286262 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.287268 systemd[1]: Created slice kubepods-burstable-pod407560494481a0ed4e30f3d50b60939f.slice - libcontainer container kubepods-burstable-pod407560494481a0ed4e30f3d50b60939f.slice. 
Jan 27 05:39:20.288706 kubelet[2531]: E0127 05:39:20.288689 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.293208 kubelet[2531]: I0127 05:39:20.293147 2531 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.293690 kubelet[2531]: E0127 05:39:20.293669 2531 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.41:6443/api/v1/nodes\": dial tcp 10.0.7.41:6443: connect: connection refused" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.326446 kubelet[2531]: E0127 05:39:20.326396 2531 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-eb4c5d05b1?timeout=10s\": dial tcp 10.0.7.41:6443: connect: connection refused" interval="400ms" Jan 27 05:39:20.426054 kubelet[2531]: I0127 05:39:20.425794 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21a9838d1587ce67e1583dca6b6d8e19-kubeconfig\") pod \"kube-scheduler-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"21a9838d1587ce67e1583dca6b6d8e19\") " pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.426054 kubelet[2531]: I0127 05:39:20.425837 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407560494481a0ed4e30f3d50b60939f-k8s-certs\") pod \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"407560494481a0ed4e30f3d50b60939f\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.426054 kubelet[2531]: I0127 05:39:20.425864 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-ca-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.426054 kubelet[2531]: I0127 05:39:20.425884 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-flexvolume-dir\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.426054 kubelet[2531]: I0127 05:39:20.425901 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-kubeconfig\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.426272 kubelet[2531]: I0127 05:39:20.425916 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407560494481a0ed4e30f3d50b60939f-ca-certs\") pod \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"407560494481a0ed4e30f3d50b60939f\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 
05:39:20.426272 kubelet[2531]: I0127 05:39:20.425929 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407560494481a0ed4e30f3d50b60939f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"407560494481a0ed4e30f3d50b60939f\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.426272 kubelet[2531]: I0127 05:39:20.425947 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-k8s-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.426272 kubelet[2531]: I0127 05:39:20.425965 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.496294 kubelet[2531]: I0127 05:39:20.496267 2531 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.496772 kubelet[2531]: E0127 05:39:20.496750 2531 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.41:6443/api/v1/nodes\": dial tcp 10.0.7.41:6443: connect: connection refused" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.581240 containerd[1681]: time="2026-01-27T05:39:20.581128777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4592-0-0-n-eb4c5d05b1,Uid:21a9838d1587ce67e1583dca6b6d8e19,Namespace:kube-system,Attempt:0,}" Jan 27 05:39:20.588009 containerd[1681]: time="2026-01-27T05:39:20.587980708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1,Uid:75bcf10f02a58c1e2b6f37ff11a1481d,Namespace:kube-system,Attempt:0,}" Jan 27 05:39:20.590356 containerd[1681]: time="2026-01-27T05:39:20.590326586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4592-0-0-n-eb4c5d05b1,Uid:407560494481a0ed4e30f3d50b60939f,Namespace:kube-system,Attempt:0,}" Jan 27 05:39:20.619349 containerd[1681]: time="2026-01-27T05:39:20.618870157Z" level=info msg="connecting to shim 154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170" address="unix:///run/containerd/s/ee076ce4bfd52c155852d20b848fa20667053fcbe35ab1f5a29543c353e6a34c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:39:20.635520 containerd[1681]: time="2026-01-27T05:39:20.635480312Z" level=info msg="connecting to shim 724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324" address="unix:///run/containerd/s/ffa770c716ab08b1817ab9dae90bd2804b8a7ecf562ed629cae86a5bc97a3afc" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:39:20.657335 containerd[1681]: time="2026-01-27T05:39:20.657290867Z" level=info msg="connecting to shim 099bf0370d69b60a2e29c1189d8f05faef7c6c60eefb80265b6bfdc56519eceb" address="unix:///run/containerd/s/4a3ab3362a6a09bfa32d0179eb1c9d71bb4d0651598255d5888ee084602b7c95" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:39:20.669586 systemd[1]: Started 
cri-containerd-154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170.scope - libcontainer container 154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170. Jan 27 05:39:20.686234 systemd[1]: Started cri-containerd-724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324.scope - libcontainer container 724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324. Jan 27 05:39:20.688000 audit: BPF prog-id=86 op=LOAD Jan 27 05:39:20.689000 audit: BPF prog-id=87 op=LOAD Jan 27 05:39:20.689000 audit[2591]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2571 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346534663662303133396461636538393063343239383165613839 Jan 27 05:39:20.690000 audit: BPF prog-id=87 op=UNLOAD Jan 27 05:39:20.690000 audit[2591]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346534663662303133396461636538393063343239383165613839 Jan 27 05:39:20.690000 audit: BPF prog-id=88 op=LOAD Jan 27 05:39:20.690000 audit[2591]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2571 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346534663662303133396461636538393063343239383165613839 Jan 27 05:39:20.690000 audit: BPF prog-id=89 op=LOAD Jan 27 05:39:20.690000 audit[2591]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2571 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346534663662303133396461636538393063343239383165613839 Jan 27 05:39:20.691000 audit: BPF prog-id=89 op=UNLOAD Jan 27 05:39:20.691000 audit[2591]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:39:20.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346534663662303133396461636538393063343239383165613839 Jan 27 05:39:20.691000 audit: BPF prog-id=88 op=UNLOAD Jan 27 05:39:20.691000 audit[2591]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346534663662303133396461636538393063343239383165613839 Jan 27 05:39:20.691000 audit: BPF prog-id=90 op=LOAD Jan 27 05:39:20.691000 audit[2591]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2571 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135346534663662303133396461636538393063343239383165613839 Jan 27 05:39:20.700214 systemd[1]: Started cri-containerd-099bf0370d69b60a2e29c1189d8f05faef7c6c60eefb80265b6bfdc56519eceb.scope - libcontainer container 099bf0370d69b60a2e29c1189d8f05faef7c6c60eefb80265b6bfdc56519eceb. 
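All of the SYSCALL records interleaved with these cri-containerd scope startups carry arch=c000003e, i.e. x86-64, so the numeric syscall field resolves against the x86-64 table: 321 is bpf (most likely runc installing its cgroup-v2 device filter, which lines up with the surrounding BPF prog-id LOAD/UNLOAD events), 3 is close, and the earlier sendmsg calls (46) were the xtables netlink traffic. A tiny hand-written lookup like the sketch below, covering only the numbers seen in this segment (a full mapping would come from the kernel's syscall_64.tbl or a tool such as ausyscall), makes the stream easier to skim.

    # Hand-maintained subset of the x86-64 syscall table, covering only the
    # numbers that appear in the audit records of this boot log.
    X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

    def describe_syscall(fields: dict) -> str:
        """Render a parsed SYSCALL record as 'comm -> name(nr) = exit'."""
        nr = int(fields["syscall"])
        name = X86_64_SYSCALLS.get(nr, "unknown")
        return f'{fields["comm"]} -> {name}({nr}) = {fields["exit"]}'

    print(describe_syscall({"syscall": "321", "comm": "runc", "exit": "21"}))  # runc -> bpf(321) = 21
    print(describe_syscall({"syscall": "3", "comm": "runc", "exit": "0"}))     # runc -> close(3) = 0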
Jan 27 05:39:20.701000 audit: BPF prog-id=91 op=LOAD Jan 27 05:39:20.702000 audit: BPF prog-id=92 op=LOAD Jan 27 05:39:20.702000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2590 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732346230383238616338396165326464333236383831323931393761 Jan 27 05:39:20.703000 audit: BPF prog-id=92 op=UNLOAD Jan 27 05:39:20.703000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732346230383238616338396165326464333236383831323931393761 Jan 27 05:39:20.703000 audit: BPF prog-id=93 op=LOAD Jan 27 05:39:20.703000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2590 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732346230383238616338396165326464333236383831323931393761 Jan 27 05:39:20.703000 audit: BPF prog-id=94 op=LOAD Jan 27 05:39:20.703000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2590 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732346230383238616338396165326464333236383831323931393761 Jan 27 05:39:20.705000 audit: BPF prog-id=94 op=UNLOAD Jan 27 05:39:20.705000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732346230383238616338396165326464333236383831323931393761 Jan 27 05:39:20.705000 audit: BPF prog-id=93 op=UNLOAD Jan 27 05:39:20.705000 audit[2618]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732346230383238616338396165326464333236383831323931393761 Jan 27 05:39:20.705000 audit: BPF prog-id=95 op=LOAD Jan 27 05:39:20.705000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2590 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732346230383238616338396165326464333236383831323931393761 Jan 27 05:39:20.719000 audit: BPF prog-id=96 op=LOAD Jan 27 05:39:20.721000 audit: BPF prog-id=97 op=LOAD Jan 27 05:39:20.721000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2616 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039396266303337306436396236306132653239633131383964386630 Jan 27 05:39:20.721000 audit: BPF prog-id=97 op=UNLOAD Jan 27 05:39:20.721000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2616 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039396266303337306436396236306132653239633131383964386630 Jan 27 05:39:20.721000 audit: BPF prog-id=98 op=LOAD Jan 27 05:39:20.721000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2616 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039396266303337306436396236306132653239633131383964386630 Jan 27 05:39:20.721000 audit: BPF prog-id=99 op=LOAD Jan 27 05:39:20.721000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2616 pid=2646 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039396266303337306436396236306132653239633131383964386630 Jan 27 05:39:20.721000 audit: BPF prog-id=99 op=UNLOAD Jan 27 05:39:20.721000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2616 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039396266303337306436396236306132653239633131383964386630 Jan 27 05:39:20.721000 audit: BPF prog-id=98 op=UNLOAD Jan 27 05:39:20.721000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2616 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039396266303337306436396236306132653239633131383964386630 Jan 27 05:39:20.721000 audit: BPF prog-id=100 op=LOAD Jan 27 05:39:20.721000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2616 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039396266303337306436396236306132653239633131383964386630 Jan 27 05:39:20.727050 kubelet[2531]: E0127 05:39:20.726783 2531 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-eb4c5d05b1?timeout=10s\": dial tcp 10.0.7.41:6443: connect: connection refused" interval="800ms" Jan 27 05:39:20.742631 containerd[1681]: time="2026-01-27T05:39:20.742591811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4592-0-0-n-eb4c5d05b1,Uid:21a9838d1587ce67e1583dca6b6d8e19,Namespace:kube-system,Attempt:0,} returns sandbox id \"154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170\"" Jan 27 05:39:20.747069 containerd[1681]: time="2026-01-27T05:39:20.746752833Z" level=info msg="CreateContainer within sandbox \"154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 27 05:39:20.760646 containerd[1681]: time="2026-01-27T05:39:20.760617734Z" level=info msg="Container 
534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:20.770437 containerd[1681]: time="2026-01-27T05:39:20.770373924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1,Uid:75bcf10f02a58c1e2b6f37ff11a1481d,Namespace:kube-system,Attempt:0,} returns sandbox id \"724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324\"" Jan 27 05:39:20.774594 containerd[1681]: time="2026-01-27T05:39:20.774481299Z" level=info msg="CreateContainer within sandbox \"154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4\"" Jan 27 05:39:20.777250 containerd[1681]: time="2026-01-27T05:39:20.777229309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4592-0-0-n-eb4c5d05b1,Uid:407560494481a0ed4e30f3d50b60939f,Namespace:kube-system,Attempt:0,} returns sandbox id \"099bf0370d69b60a2e29c1189d8f05faef7c6c60eefb80265b6bfdc56519eceb\"" Jan 27 05:39:20.778096 containerd[1681]: time="2026-01-27T05:39:20.777660418Z" level=info msg="StartContainer for \"534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4\"" Jan 27 05:39:20.778376 containerd[1681]: time="2026-01-27T05:39:20.778355064Z" level=info msg="CreateContainer within sandbox \"724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 27 05:39:20.780204 containerd[1681]: time="2026-01-27T05:39:20.780182673Z" level=info msg="connecting to shim 534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4" address="unix:///run/containerd/s/ee076ce4bfd52c155852d20b848fa20667053fcbe35ab1f5a29543c353e6a34c" protocol=ttrpc version=3 Jan 27 05:39:20.788513 containerd[1681]: time="2026-01-27T05:39:20.788482482Z" level=info msg="Container d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:20.803236 systemd[1]: Started cri-containerd-534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4.scope - libcontainer container 534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4. 
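Each static pod goes through the same CRI sequence here: RunPodSandbox returns a 64-character sandbox ID, systemd starts a cri-containerd-<id>.scope unit for it, and CreateContainer/StartContainer then run inside that sandbox. When reading a capture like this it helps to index which sandbox belongs to which pod; the sketch below is one way to do it (the regex is tailored to the escaped quoting exactly as it appears in this journal text, not a general containerd log parser).

    import re

    # Map each pod sandbox ID back to the static pod it hosts, using the
    # 'RunPodSandbox ... returns sandbox id' lines emitted by containerd.
    SANDBOX_RE = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),Uid:(?P<uid>[^,]+),'
        r'.*?returns sandbox id \\?"(?P<sid>[0-9a-f]{64})'
    )

    def index_sandboxes(journal_text: str) -> dict[str, str]:
        """Return {sandbox_id: pod_name} for every successful RunPodSandbox line."""
        return {m.group("sid"): m.group("pod") for m in SANDBOX_RE.finditer(journal_text)}

    line = ('RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4592-0-0-n-eb4c5d05b1,'
            'Uid:21a9838d1587ce67e1583dca6b6d8e19,Namespace:kube-system,Attempt:0,} '
            'returns sandbox id \\"154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170\\"')
    print(index_sandboxes(line))

Run over the three "returns sandbox id" lines above, this maps 154e4f6b… to kube-scheduler, 724b0828… to kube-controller-manager, and 099bf037… to kube-apiserver.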
Jan 27 05:39:20.804146 containerd[1681]: time="2026-01-27T05:39:20.803715792Z" level=info msg="CreateContainer within sandbox \"724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f\"" Jan 27 05:39:20.804146 containerd[1681]: time="2026-01-27T05:39:20.804019612Z" level=info msg="CreateContainer within sandbox \"099bf0370d69b60a2e29c1189d8f05faef7c6c60eefb80265b6bfdc56519eceb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 27 05:39:20.808339 containerd[1681]: time="2026-01-27T05:39:20.808312773Z" level=info msg="StartContainer for \"d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f\"" Jan 27 05:39:20.812089 containerd[1681]: time="2026-01-27T05:39:20.811404025Z" level=info msg="connecting to shim d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f" address="unix:///run/containerd/s/ffa770c716ab08b1817ab9dae90bd2804b8a7ecf562ed629cae86a5bc97a3afc" protocol=ttrpc version=3 Jan 27 05:39:20.813925 containerd[1681]: time="2026-01-27T05:39:20.813908242Z" level=info msg="Container 859b5c4d8d2ed8300e44f14f778a44e74c68e8ac63ed193afbb7414420f196a5: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:20.816000 audit: BPF prog-id=101 op=LOAD Jan 27 05:39:20.817000 audit: BPF prog-id=102 op=LOAD Jan 27 05:39:20.817000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2571 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533346635633938323237333832623933323036623664666530653064 Jan 27 05:39:20.817000 audit: BPF prog-id=102 op=UNLOAD Jan 27 05:39:20.817000 audit[2701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533346635633938323237333832623933323036623664666530653064 Jan 27 05:39:20.818000 audit: BPF prog-id=103 op=LOAD Jan 27 05:39:20.818000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2571 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533346635633938323237333832623933323036623664666530653064 Jan 27 05:39:20.818000 audit: BPF prog-id=104 op=LOAD Jan 27 05:39:20.818000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 
a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2571 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533346635633938323237333832623933323036623664666530653064 Jan 27 05:39:20.818000 audit: BPF prog-id=104 op=UNLOAD Jan 27 05:39:20.818000 audit[2701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533346635633938323237333832623933323036623664666530653064 Jan 27 05:39:20.818000 audit: BPF prog-id=103 op=UNLOAD Jan 27 05:39:20.818000 audit[2701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533346635633938323237333832623933323036623664666530653064 Jan 27 05:39:20.818000 audit: BPF prog-id=105 op=LOAD Jan 27 05:39:20.818000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2571 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533346635633938323237333832623933323036623664666530653064 Jan 27 05:39:20.825010 containerd[1681]: time="2026-01-27T05:39:20.824875720Z" level=info msg="CreateContainer within sandbox \"099bf0370d69b60a2e29c1189d8f05faef7c6c60eefb80265b6bfdc56519eceb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"859b5c4d8d2ed8300e44f14f778a44e74c68e8ac63ed193afbb7414420f196a5\"" Jan 27 05:39:20.826067 containerd[1681]: time="2026-01-27T05:39:20.825927877Z" level=info msg="StartContainer for \"859b5c4d8d2ed8300e44f14f778a44e74c68e8ac63ed193afbb7414420f196a5\"" Jan 27 05:39:20.828743 containerd[1681]: time="2026-01-27T05:39:20.828363360Z" level=info msg="connecting to shim 859b5c4d8d2ed8300e44f14f778a44e74c68e8ac63ed193afbb7414420f196a5" address="unix:///run/containerd/s/4a3ab3362a6a09bfa32d0179eb1c9d71bb4d0651598255d5888ee084602b7c95" protocol=ttrpc version=3 Jan 27 05:39:20.836189 systemd[1]: Started cri-containerd-d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f.scope - libcontainer 
container d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f. Jan 27 05:39:20.853452 systemd[1]: Started cri-containerd-859b5c4d8d2ed8300e44f14f778a44e74c68e8ac63ed193afbb7414420f196a5.scope - libcontainer container 859b5c4d8d2ed8300e44f14f778a44e74c68e8ac63ed193afbb7414420f196a5. Jan 27 05:39:20.868000 audit: BPF prog-id=106 op=LOAD Jan 27 05:39:20.868000 audit: BPF prog-id=107 op=LOAD Jan 27 05:39:20.868000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2590 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383838376363643865346630306661366633383564303833613734 Jan 27 05:39:20.868000 audit: BPF prog-id=107 op=UNLOAD Jan 27 05:39:20.868000 audit[2722]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383838376363643865346630306661366633383564303833613734 Jan 27 05:39:20.869000 audit: BPF prog-id=108 op=LOAD Jan 27 05:39:20.869000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2590 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383838376363643865346630306661366633383564303833613734 Jan 27 05:39:20.869000 audit: BPF prog-id=109 op=LOAD Jan 27 05:39:20.869000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2590 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383838376363643865346630306661366633383564303833613734 Jan 27 05:39:20.869000 audit: BPF prog-id=109 op=UNLOAD Jan 27 05:39:20.869000 audit[2722]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.869000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383838376363643865346630306661366633383564303833613734 Jan 27 05:39:20.869000 audit: BPF prog-id=108 op=UNLOAD Jan 27 05:39:20.869000 audit[2722]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383838376363643865346630306661366633383564303833613734 Jan 27 05:39:20.869000 audit: BPF prog-id=110 op=LOAD Jan 27 05:39:20.869000 audit[2722]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2590 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435383838376363643865346630306661366633383564303833613734 Jan 27 05:39:20.878787 containerd[1681]: time="2026-01-27T05:39:20.878435715Z" level=info msg="StartContainer for \"534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4\" returns successfully" Jan 27 05:39:20.878000 audit: BPF prog-id=111 op=LOAD Jan 27 05:39:20.879000 audit: BPF prog-id=112 op=LOAD Jan 27 05:39:20.879000 audit[2734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2616 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835396235633464386432656438333030653434663134663737386134 Jan 27 05:39:20.880000 audit: BPF prog-id=112 op=UNLOAD Jan 27 05:39:20.880000 audit[2734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2616 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835396235633464386432656438333030653434663134663737386134 Jan 27 05:39:20.880000 audit: BPF prog-id=113 op=LOAD Jan 27 05:39:20.880000 audit[2734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2616 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835396235633464386432656438333030653434663134663737386134 Jan 27 05:39:20.880000 audit: BPF prog-id=114 op=LOAD Jan 27 05:39:20.880000 audit[2734]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2616 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835396235633464386432656438333030653434663134663737386134 Jan 27 05:39:20.880000 audit: BPF prog-id=114 op=UNLOAD Jan 27 05:39:20.880000 audit[2734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2616 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835396235633464386432656438333030653434663134663737386134 Jan 27 05:39:20.880000 audit: BPF prog-id=113 op=UNLOAD Jan 27 05:39:20.880000 audit[2734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2616 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835396235633464386432656438333030653434663134663737386134 Jan 27 05:39:20.880000 audit: BPF prog-id=115 op=LOAD Jan 27 05:39:20.880000 audit[2734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2616 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:20.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835396235633464386432656438333030653434663134663737386134 Jan 27 05:39:20.900272 kubelet[2531]: I0127 05:39:20.900156 2531 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.900545 kubelet[2531]: E0127 05:39:20.900478 2531 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.41:6443/api/v1/nodes\": dial tcp 10.0.7.41:6443: connect: connection refused" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:20.916028 
containerd[1681]: time="2026-01-27T05:39:20.915991268Z" level=info msg="StartContainer for \"d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f\" returns successfully" Jan 27 05:39:20.940807 containerd[1681]: time="2026-01-27T05:39:20.940714988Z" level=info msg="StartContainer for \"859b5c4d8d2ed8300e44f14f778a44e74c68e8ac63ed193afbb7414420f196a5\" returns successfully" Jan 27 05:39:21.161950 kubelet[2531]: E0127 05:39:21.161680 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:21.166867 kubelet[2531]: E0127 05:39:21.166661 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:21.167296 kubelet[2531]: E0127 05:39:21.167176 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:21.703057 kubelet[2531]: I0127 05:39:21.703022 2531 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:22.170085 kubelet[2531]: E0127 05:39:22.169862 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:22.170085 kubelet[2531]: E0127 05:39:22.169928 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.151074 kubelet[2531]: I0127 05:39:23.150362 2531 kubelet_node_status.go:78] "Successfully registered node" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.151074 kubelet[2531]: E0127 05:39:23.150400 2531 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4592-0-0-n-eb4c5d05b1\": node \"ci-4592-0-0-n-eb4c5d05b1\" not found" Jan 27 05:39:23.170308 kubelet[2531]: I0127 05:39:23.170289 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.175877 kubelet[2531]: E0127 05:39:23.175833 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4592-0-0-n-eb4c5d05b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.225511 kubelet[2531]: I0127 05:39:23.225474 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.228019 kubelet[2531]: E0127 05:39:23.227892 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4592-0-0-n-eb4c5d05b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.228019 kubelet[2531]: I0127 05:39:23.227917 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.229926 kubelet[2531]: E0127 05:39:23.229801 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.229926 kubelet[2531]: I0127 05:39:23.229819 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:23.231836 kubelet[2531]: E0127 05:39:23.231806 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:24.111877 kubelet[2531]: I0127 05:39:24.111650 2531 apiserver.go:52] "Watching apiserver" Jan 27 05:39:24.124430 kubelet[2531]: I0127 05:39:24.124306 2531 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 27 05:39:24.859326 systemd[1]: Reload requested from client PID 2798 ('systemctl') (unit session-8.scope)... Jan 27 05:39:24.859605 systemd[1]: Reloading... Jan 27 05:39:24.951133 zram_generator::config[2843]: No configuration found. Jan 27 05:39:25.068056 kubelet[2531]: I0127 05:39:25.066702 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.160103 kubelet[2531]: I0127 05:39:25.158646 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.165683 systemd[1]: Reloading finished in 305 ms. Jan 27 05:39:25.189876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:39:25.204481 systemd[1]: kubelet.service: Deactivated successfully. Jan 27 05:39:25.204830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:25.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:25.204941 systemd[1]: kubelet.service: Consumed 618ms CPU time, 130.2M memory peak. Jan 27 05:39:25.205653 kernel: kauditd_printk_skb: 159 callbacks suppressed Jan 27 05:39:25.205702 kernel: audit: type=1131 audit(1769492365.203:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:25.211055 kernel: audit: type=1334 audit(1769492365.207:403): prog-id=116 op=LOAD Jan 27 05:39:25.207000 audit: BPF prog-id=116 op=LOAD Jan 27 05:39:25.208316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 27 05:39:25.207000 audit: BPF prog-id=83 op=UNLOAD Jan 27 05:39:25.212379 kernel: audit: type=1334 audit(1769492365.207:404): prog-id=83 op=UNLOAD Jan 27 05:39:25.214904 kernel: audit: type=1334 audit(1769492365.207:405): prog-id=117 op=LOAD Jan 27 05:39:25.214948 kernel: audit: type=1334 audit(1769492365.207:406): prog-id=118 op=LOAD Jan 27 05:39:25.207000 audit: BPF prog-id=117 op=LOAD Jan 27 05:39:25.207000 audit: BPF prog-id=118 op=LOAD Jan 27 05:39:25.207000 audit: BPF prog-id=84 op=UNLOAD Jan 27 05:39:25.207000 audit: BPF prog-id=85 op=UNLOAD Jan 27 05:39:25.211000 audit: BPF prog-id=119 op=LOAD Jan 27 05:39:25.211000 audit: BPF prog-id=80 op=UNLOAD Jan 27 05:39:25.212000 audit: BPF prog-id=120 op=LOAD Jan 27 05:39:25.212000 audit: BPF prog-id=121 op=LOAD Jan 27 05:39:25.212000 audit: BPF prog-id=81 op=UNLOAD Jan 27 05:39:25.212000 audit: BPF prog-id=82 op=UNLOAD Jan 27 05:39:25.213000 audit: BPF prog-id=122 op=LOAD Jan 27 05:39:25.213000 audit: BPF prog-id=70 op=UNLOAD Jan 27 05:39:25.213000 audit: BPF prog-id=123 op=LOAD Jan 27 05:39:25.213000 audit: BPF prog-id=124 op=LOAD Jan 27 05:39:25.213000 audit: BPF prog-id=71 op=UNLOAD Jan 27 05:39:25.213000 audit: BPF prog-id=72 op=UNLOAD Jan 27 05:39:25.218049 kernel: audit: type=1334 audit(1769492365.207:407): prog-id=84 op=UNLOAD Jan 27 05:39:25.218084 kernel: audit: type=1334 audit(1769492365.207:408): prog-id=85 op=UNLOAD Jan 27 05:39:25.218104 kernel: audit: type=1334 audit(1769492365.211:409): prog-id=119 op=LOAD Jan 27 05:39:25.218124 kernel: audit: type=1334 audit(1769492365.211:410): prog-id=80 op=UNLOAD Jan 27 05:39:25.218142 kernel: audit: type=1334 audit(1769492365.212:411): prog-id=120 op=LOAD Jan 27 05:39:25.215000 audit: BPF prog-id=125 op=LOAD Jan 27 05:39:25.215000 audit: BPF prog-id=69 op=UNLOAD Jan 27 05:39:25.216000 audit: BPF prog-id=126 op=LOAD Jan 27 05:39:25.216000 audit: BPF prog-id=76 op=UNLOAD Jan 27 05:39:25.216000 audit: BPF prog-id=127 op=LOAD Jan 27 05:39:25.216000 audit: BPF prog-id=128 op=LOAD Jan 27 05:39:25.216000 audit: BPF prog-id=77 op=UNLOAD Jan 27 05:39:25.216000 audit: BPF prog-id=78 op=UNLOAD Jan 27 05:39:25.217000 audit: BPF prog-id=129 op=LOAD Jan 27 05:39:25.217000 audit: BPF prog-id=79 op=UNLOAD Jan 27 05:39:25.218000 audit: BPF prog-id=130 op=LOAD Jan 27 05:39:25.218000 audit: BPF prog-id=73 op=UNLOAD Jan 27 05:39:25.218000 audit: BPF prog-id=131 op=LOAD Jan 27 05:39:25.218000 audit: BPF prog-id=132 op=LOAD Jan 27 05:39:25.218000 audit: BPF prog-id=74 op=UNLOAD Jan 27 05:39:25.218000 audit: BPF prog-id=75 op=UNLOAD Jan 27 05:39:25.219000 audit: BPF prog-id=133 op=LOAD Jan 27 05:39:25.219000 audit: BPF prog-id=134 op=LOAD Jan 27 05:39:25.219000 audit: BPF prog-id=67 op=UNLOAD Jan 27 05:39:25.219000 audit: BPF prog-id=68 op=UNLOAD Jan 27 05:39:25.219000 audit: BPF prog-id=135 op=LOAD Jan 27 05:39:25.219000 audit: BPF prog-id=66 op=UNLOAD Jan 27 05:39:25.392625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:25.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:25.400431 (kubelet)[2895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 05:39:25.453017 kubelet[2895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 05:39:25.453017 kubelet[2895]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 05:39:25.453017 kubelet[2895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 05:39:25.454187 kubelet[2895]: I0127 05:39:25.453044 2895 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 05:39:25.462678 kubelet[2895]: I0127 05:39:25.462636 2895 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 27 05:39:25.462678 kubelet[2895]: I0127 05:39:25.462661 2895 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 05:39:25.462932 kubelet[2895]: I0127 05:39:25.462915 2895 server.go:954] "Client rotation is on, will bootstrap in background" Jan 27 05:39:25.464201 kubelet[2895]: I0127 05:39:25.464185 2895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 05:39:25.466859 kubelet[2895]: I0127 05:39:25.466826 2895 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 05:39:25.469917 kubelet[2895]: I0127 05:39:25.469904 2895 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 05:39:25.473807 kubelet[2895]: I0127 05:39:25.473077 2895 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 27 05:39:25.473807 kubelet[2895]: I0127 05:39:25.473273 2895 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 05:39:25.473807 kubelet[2895]: I0127 05:39:25.473292 2895 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4592-0-0-n-eb4c5d05b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 05:39:25.473807 kubelet[2895]: I0127 05:39:25.473560 2895 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 05:39:25.473973 kubelet[2895]: I0127 05:39:25.473570 2895 container_manager_linux.go:304] "Creating device plugin manager" Jan 27 05:39:25.473973 kubelet[2895]: I0127 05:39:25.473618 2895 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:39:25.474060 kubelet[2895]: I0127 05:39:25.474051 2895 kubelet.go:446] "Attempting to sync node with API server" Jan 27 05:39:25.474111 kubelet[2895]: I0127 05:39:25.474105 2895 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 05:39:25.474164 kubelet[2895]: I0127 05:39:25.474160 2895 kubelet.go:352] "Adding apiserver pod source" Jan 27 05:39:25.474203 kubelet[2895]: I0127 05:39:25.474198 2895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 05:39:25.478633 kubelet[2895]: I0127 05:39:25.478617 2895 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 05:39:25.479784 kubelet[2895]: I0127 05:39:25.479115 2895 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 05:39:25.480237 kubelet[2895]: I0127 05:39:25.480223 2895 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 27 05:39:25.480317 kubelet[2895]: I0127 05:39:25.480311 2895 server.go:1287] "Started kubelet" Jan 27 05:39:25.482984 kubelet[2895]: I0127 05:39:25.482971 2895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 05:39:25.492373 kubelet[2895]: I0127 05:39:25.492317 2895 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 27 05:39:25.493943 kubelet[2895]: I0127 05:39:25.493917 2895 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 05:39:25.494870 kubelet[2895]: I0127 05:39:25.494856 2895 server.go:479] "Adding debug handlers to kubelet server" Jan 27 05:39:25.495675 kubelet[2895]: I0127 05:39:25.495633 2895 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 05:39:25.495812 kubelet[2895]: I0127 05:39:25.495802 2895 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 05:39:25.497620 kubelet[2895]: I0127 05:39:25.497381 2895 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 27 05:39:25.497885 kubelet[2895]: E0127 05:39:25.497871 2895 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4592-0-0-n-eb4c5d05b1\" not found" Jan 27 05:39:25.499796 kubelet[2895]: I0127 05:39:25.499783 2895 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 27 05:39:25.499949 kubelet[2895]: I0127 05:39:25.499942 2895 reconciler.go:26] "Reconciler: start to sync state" Jan 27 05:39:25.501701 kubelet[2895]: I0127 05:39:25.501663 2895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 05:39:25.503134 kubelet[2895]: I0127 05:39:25.503120 2895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 05:39:25.503219 kubelet[2895]: I0127 05:39:25.503213 2895 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 27 05:39:25.503268 kubelet[2895]: I0127 05:39:25.503263 2895 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 27 05:39:25.503301 kubelet[2895]: I0127 05:39:25.503296 2895 kubelet.go:2382] "Starting kubelet main sync loop" Jan 27 05:39:25.503384 kubelet[2895]: E0127 05:39:25.503368 2895 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 05:39:25.508232 kubelet[2895]: I0127 05:39:25.508206 2895 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 05:39:25.510566 kubelet[2895]: E0127 05:39:25.510544 2895 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 05:39:25.512444 kubelet[2895]: I0127 05:39:25.512264 2895 factory.go:221] Registration of the containerd container factory successfully Jan 27 05:39:25.512444 kubelet[2895]: I0127 05:39:25.512279 2895 factory.go:221] Registration of the systemd container factory successfully Jan 27 05:39:25.560136 kubelet[2895]: I0127 05:39:25.560117 2895 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 05:39:25.560268 kubelet[2895]: I0127 05:39:25.560259 2895 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 05:39:25.560314 kubelet[2895]: I0127 05:39:25.560308 2895 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:39:25.560484 kubelet[2895]: I0127 05:39:25.560475 2895 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 27 05:39:25.560535 kubelet[2895]: I0127 05:39:25.560517 2895 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 27 05:39:25.560566 kubelet[2895]: I0127 05:39:25.560562 2895 policy_none.go:49] "None policy: Start" Jan 27 05:39:25.560742 kubelet[2895]: I0127 05:39:25.560600 2895 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 27 05:39:25.560742 kubelet[2895]: I0127 05:39:25.560609 2895 state_mem.go:35] "Initializing new in-memory state store" Jan 27 05:39:25.560742 kubelet[2895]: I0127 05:39:25.560697 2895 state_mem.go:75] "Updated machine memory state" Jan 27 05:39:25.564235 kubelet[2895]: I0127 05:39:25.564221 2895 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 05:39:25.564439 kubelet[2895]: I0127 05:39:25.564430 2895 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 05:39:25.564598 kubelet[2895]: I0127 05:39:25.564574 2895 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 05:39:25.564788 kubelet[2895]: I0127 05:39:25.564780 2895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 05:39:25.568299 kubelet[2895]: E0127 05:39:25.568286 2895 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 27 05:39:25.604692 kubelet[2895]: I0127 05:39:25.604654 2895 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.605098 kubelet[2895]: I0127 05:39:25.604672 2895 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.605559 kubelet[2895]: I0127 05:39:25.604770 2895 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.614104 kubelet[2895]: E0127 05:39:25.614084 2895 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" already exists" pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.614288 kubelet[2895]: E0127 05:39:25.614277 2895 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" already exists" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.668977 kubelet[2895]: I0127 05:39:25.668556 2895 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.678236 kubelet[2895]: I0127 05:39:25.678215 2895 kubelet_node_status.go:124] "Node was previously registered" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.678392 kubelet[2895]: I0127 05:39:25.678384 2895 kubelet_node_status.go:78] "Successfully registered node" node="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.801658 kubelet[2895]: I0127 05:39:25.801582 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-ca-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.802472 kubelet[2895]: I0127 05:39:25.802104 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-kubeconfig\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.802472 kubelet[2895]: I0127 05:39:25.802180 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.802472 kubelet[2895]: I0127 05:39:25.802246 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21a9838d1587ce67e1583dca6b6d8e19-kubeconfig\") pod \"kube-scheduler-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"21a9838d1587ce67e1583dca6b6d8e19\") " pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.802472 kubelet[2895]: I0127 05:39:25.802285 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407560494481a0ed4e30f3d50b60939f-ca-certs\") pod 
\"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"407560494481a0ed4e30f3d50b60939f\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.802472 kubelet[2895]: I0127 05:39:25.802327 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407560494481a0ed4e30f3d50b60939f-k8s-certs\") pod \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"407560494481a0ed4e30f3d50b60939f\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.803227 kubelet[2895]: I0127 05:39:25.802353 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407560494481a0ed4e30f3d50b60939f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"407560494481a0ed4e30f3d50b60939f\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.803227 kubelet[2895]: I0127 05:39:25.802376 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-flexvolume-dir\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:25.803227 kubelet[2895]: I0127 05:39:25.802397 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75bcf10f02a58c1e2b6f37ff11a1481d-k8s-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1\" (UID: \"75bcf10f02a58c1e2b6f37ff11a1481d\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:26.482746 kubelet[2895]: I0127 05:39:26.482702 2895 apiserver.go:52] "Watching apiserver" Jan 27 05:39:26.500823 kubelet[2895]: I0127 05:39:26.500142 2895 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 27 05:39:26.526171 kubelet[2895]: I0127 05:39:26.526105 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" podStartSLOduration=1.525946347 podStartE2EDuration="1.525946347s" podCreationTimestamp="2026-01-27 05:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:39:26.525826456 +0000 UTC m=+1.120325016" watchObservedRunningTime="2026-01-27 05:39:26.525946347 +0000 UTC m=+1.120444877" Jan 27 05:39:26.544183 kubelet[2895]: I0127 05:39:26.544152 2895 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:26.545843 kubelet[2895]: I0127 05:39:26.545490 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4592-0-0-n-eb4c5d05b1" podStartSLOduration=1.54547618 podStartE2EDuration="1.54547618s" podCreationTimestamp="2026-01-27 05:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:39:26.535504557 +0000 UTC m=+1.130003101" watchObservedRunningTime="2026-01-27 05:39:26.54547618 +0000 UTC m=+1.139974728" Jan 27 05:39:26.553899 kubelet[2895]: E0127 05:39:26.553870 2895 kubelet.go:3196] "Failed creating 
a mirror pod" err="pods \"kube-apiserver-ci-4592-0-0-n-eb4c5d05b1\" already exists" pod="kube-system/kube-apiserver-ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:39:26.562229 kubelet[2895]: I0127 05:39:26.562189 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" podStartSLOduration=1.56217499 podStartE2EDuration="1.56217499s" podCreationTimestamp="2026-01-27 05:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:39:26.546421737 +0000 UTC m=+1.140920289" watchObservedRunningTime="2026-01-27 05:39:26.56217499 +0000 UTC m=+1.156673538" Jan 27 05:39:29.400774 kubelet[2895]: I0127 05:39:29.400737 2895 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 27 05:39:29.401383 containerd[1681]: time="2026-01-27T05:39:29.401355312Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 27 05:39:29.401612 kubelet[2895]: I0127 05:39:29.401595 2895 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 27 05:39:30.341539 systemd[1]: Created slice kubepods-besteffort-pode05e7423_fead_48e1_b4fe_3a357db168b5.slice - libcontainer container kubepods-besteffort-pode05e7423_fead_48e1_b4fe_3a357db168b5.slice. Jan 27 05:39:30.432418 kubelet[2895]: I0127 05:39:30.432371 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e05e7423-fead-48e1-b4fe-3a357db168b5-xtables-lock\") pod \"kube-proxy-tqcw2\" (UID: \"e05e7423-fead-48e1-b4fe-3a357db168b5\") " pod="kube-system/kube-proxy-tqcw2" Jan 27 05:39:30.432963 kubelet[2895]: I0127 05:39:30.432845 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbmcn\" (UniqueName: \"kubernetes.io/projected/e05e7423-fead-48e1-b4fe-3a357db168b5-kube-api-access-xbmcn\") pod \"kube-proxy-tqcw2\" (UID: \"e05e7423-fead-48e1-b4fe-3a357db168b5\") " pod="kube-system/kube-proxy-tqcw2" Jan 27 05:39:30.432963 kubelet[2895]: I0127 05:39:30.432889 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e05e7423-fead-48e1-b4fe-3a357db168b5-kube-proxy\") pod \"kube-proxy-tqcw2\" (UID: \"e05e7423-fead-48e1-b4fe-3a357db168b5\") " pod="kube-system/kube-proxy-tqcw2" Jan 27 05:39:30.432963 kubelet[2895]: I0127 05:39:30.432911 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e05e7423-fead-48e1-b4fe-3a357db168b5-lib-modules\") pod \"kube-proxy-tqcw2\" (UID: \"e05e7423-fead-48e1-b4fe-3a357db168b5\") " pod="kube-system/kube-proxy-tqcw2" Jan 27 05:39:30.560883 systemd[1]: Created slice kubepods-besteffort-pod52bb413b_3cc0_438f_b413_67e119c636e5.slice - libcontainer container kubepods-besteffort-pod52bb413b_3cc0_438f_b413_67e119c636e5.slice. 
Jan 27 05:39:30.635347 kubelet[2895]: I0127 05:39:30.635205 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52bb413b-3cc0-438f-b413-67e119c636e5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-msnr7\" (UID: \"52bb413b-3cc0-438f-b413-67e119c636e5\") " pod="tigera-operator/tigera-operator-7dcd859c48-msnr7" Jan 27 05:39:30.635347 kubelet[2895]: I0127 05:39:30.635262 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv697\" (UniqueName: \"kubernetes.io/projected/52bb413b-3cc0-438f-b413-67e119c636e5-kube-api-access-pv697\") pod \"tigera-operator-7dcd859c48-msnr7\" (UID: \"52bb413b-3cc0-438f-b413-67e119c636e5\") " pod="tigera-operator/tigera-operator-7dcd859c48-msnr7" Jan 27 05:39:30.651133 containerd[1681]: time="2026-01-27T05:39:30.651091595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqcw2,Uid:e05e7423-fead-48e1-b4fe-3a357db168b5,Namespace:kube-system,Attempt:0,}" Jan 27 05:39:30.706220 containerd[1681]: time="2026-01-27T05:39:30.706159052Z" level=info msg="connecting to shim 92a300a5bd8d1cf7fd99f0f52deb82a12b0044d819001cf8bf0811719c1443ca" address="unix:///run/containerd/s/7d166fa390db96f5aacbb5013826ddc712cc4fc8602ba41c7c57dbefc40be60f" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:39:30.745552 systemd[1]: Started cri-containerd-92a300a5bd8d1cf7fd99f0f52deb82a12b0044d819001cf8bf0811719c1443ca.scope - libcontainer container 92a300a5bd8d1cf7fd99f0f52deb82a12b0044d819001cf8bf0811719c1443ca. Jan 27 05:39:30.760000 audit: BPF prog-id=136 op=LOAD Jan 27 05:39:30.762452 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 27 05:39:30.762494 kernel: audit: type=1334 audit(1769492370.760:444): prog-id=136 op=LOAD Jan 27 05:39:30.763000 audit: BPF prog-id=137 op=LOAD Jan 27 05:39:30.765471 kernel: audit: type=1334 audit(1769492370.763:445): prog-id=137 op=LOAD Jan 27 05:39:30.763000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.768140 kernel: audit: type=1300 audit(1769492370.763:445): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.772893 kernel: audit: type=1327 audit(1769492370.763:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.763000 audit: BPF prog-id=137 op=UNLOAD Jan 27 05:39:30.776719 kernel: audit: type=1334 audit(1769492370.763:446): prog-id=137 op=UNLOAD Jan 27 05:39:30.763000 audit[2957]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.779664 kernel: audit: type=1300 audit(1769492370.763:446): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.763000 audit: BPF prog-id=138 op=LOAD Jan 27 05:39:30.787081 kernel: audit: type=1327 audit(1769492370.763:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.787207 kernel: audit: type=1334 audit(1769492370.763:447): prog-id=138 op=LOAD Jan 27 05:39:30.763000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.789837 kernel: audit: type=1300 audit(1769492370.763:447): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.794566 kernel: audit: type=1327 audit(1769492370.763:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.763000 audit: BPF prog-id=139 op=LOAD Jan 27 05:39:30.763000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.763000 audit: BPF prog-id=139 op=UNLOAD Jan 27 05:39:30.763000 audit[2957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 
a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.763000 audit: BPF prog-id=138 op=UNLOAD Jan 27 05:39:30.763000 audit[2957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.764000 audit: BPF prog-id=140 op=LOAD Jan 27 05:39:30.764000 audit[2957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2946 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932613330306135626438643163663766643939663066353264656238 Jan 27 05:39:30.801224 containerd[1681]: time="2026-01-27T05:39:30.801191299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqcw2,Uid:e05e7423-fead-48e1-b4fe-3a357db168b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"92a300a5bd8d1cf7fd99f0f52deb82a12b0044d819001cf8bf0811719c1443ca\"" Jan 27 05:39:30.805119 containerd[1681]: time="2026-01-27T05:39:30.805059924Z" level=info msg="CreateContainer within sandbox \"92a300a5bd8d1cf7fd99f0f52deb82a12b0044d819001cf8bf0811719c1443ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 27 05:39:30.822588 containerd[1681]: time="2026-01-27T05:39:30.821633793Z" level=info msg="Container a9dfceaaed6c14103010e4b0c484f227eaaa214b46c3bd0db56d262deec5b1c8: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:30.834012 containerd[1681]: time="2026-01-27T05:39:30.833981904Z" level=info msg="CreateContainer within sandbox \"92a300a5bd8d1cf7fd99f0f52deb82a12b0044d819001cf8bf0811719c1443ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9dfceaaed6c14103010e4b0c484f227eaaa214b46c3bd0db56d262deec5b1c8\"" Jan 27 05:39:30.834691 containerd[1681]: time="2026-01-27T05:39:30.834669241Z" level=info msg="StartContainer for \"a9dfceaaed6c14103010e4b0c484f227eaaa214b46c3bd0db56d262deec5b1c8\"" Jan 27 05:39:30.835953 containerd[1681]: time="2026-01-27T05:39:30.835831323Z" level=info msg="connecting to shim a9dfceaaed6c14103010e4b0c484f227eaaa214b46c3bd0db56d262deec5b1c8" address="unix:///run/containerd/s/7d166fa390db96f5aacbb5013826ddc712cc4fc8602ba41c7c57dbefc40be60f" protocol=ttrpc version=3 Jan 27 05:39:30.858209 systemd[1]: Started 
cri-containerd-a9dfceaaed6c14103010e4b0c484f227eaaa214b46c3bd0db56d262deec5b1c8.scope - libcontainer container a9dfceaaed6c14103010e4b0c484f227eaaa214b46c3bd0db56d262deec5b1c8. Jan 27 05:39:30.871857 containerd[1681]: time="2026-01-27T05:39:30.871818673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-msnr7,Uid:52bb413b-3cc0-438f-b413-67e119c636e5,Namespace:tigera-operator,Attempt:0,}" Jan 27 05:39:30.892000 audit: BPF prog-id=141 op=LOAD Jan 27 05:39:30.892000 audit[2984]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139646663656161656436633134313033303130653462306334383466 Jan 27 05:39:30.892000 audit: BPF prog-id=142 op=LOAD Jan 27 05:39:30.892000 audit[2984]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139646663656161656436633134313033303130653462306334383466 Jan 27 05:39:30.892000 audit: BPF prog-id=142 op=UNLOAD Jan 27 05:39:30.892000 audit[2984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139646663656161656436633134313033303130653462306334383466 Jan 27 05:39:30.892000 audit: BPF prog-id=141 op=UNLOAD Jan 27 05:39:30.892000 audit[2984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139646663656161656436633134313033303130653462306334383466 Jan 27 05:39:30.892000 audit: BPF prog-id=143 op=LOAD Jan 27 05:39:30.892000 audit[2984]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2946 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.892000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139646663656161656436633134313033303130653462306334383466 Jan 27 05:39:30.906710 containerd[1681]: time="2026-01-27T05:39:30.906677437Z" level=info msg="connecting to shim ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9" address="unix:///run/containerd/s/49bc2908d2dcc38764c3306e1a971d8802cc87595b2240df52cd5c4d6406d03c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:39:30.918193 containerd[1681]: time="2026-01-27T05:39:30.918154713Z" level=info msg="StartContainer for \"a9dfceaaed6c14103010e4b0c484f227eaaa214b46c3bd0db56d262deec5b1c8\" returns successfully" Jan 27 05:39:30.936248 systemd[1]: Started cri-containerd-ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9.scope - libcontainer container ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9. Jan 27 05:39:30.951000 audit: BPF prog-id=144 op=LOAD Jan 27 05:39:30.952000 audit: BPF prog-id=145 op=LOAD Jan 27 05:39:30.952000 audit[3030]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=3012 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643833393538656132306561373462346331623861303837353633 Jan 27 05:39:30.952000 audit: BPF prog-id=145 op=UNLOAD Jan 27 05:39:30.952000 audit[3030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643833393538656132306561373462346331623861303837353633 Jan 27 05:39:30.952000 audit: BPF prog-id=146 op=LOAD Jan 27 05:39:30.952000 audit[3030]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=3012 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643833393538656132306561373462346331623861303837353633 Jan 27 05:39:30.952000 audit: BPF prog-id=147 op=LOAD Jan 27 05:39:30.952000 audit[3030]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=3012 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.952000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643833393538656132306561373462346331623861303837353633 Jan 27 05:39:30.952000 audit: BPF prog-id=147 op=UNLOAD Jan 27 05:39:30.952000 audit[3030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643833393538656132306561373462346331623861303837353633 Jan 27 05:39:30.952000 audit: BPF prog-id=146 op=UNLOAD Jan 27 05:39:30.952000 audit[3030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643833393538656132306561373462346331623861303837353633 Jan 27 05:39:30.952000 audit: BPF prog-id=148 op=LOAD Jan 27 05:39:30.952000 audit[3030]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=3012 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:30.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643833393538656132306561373462346331623861303837353633 Jan 27 05:39:30.995135 containerd[1681]: time="2026-01-27T05:39:30.995079230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-msnr7,Uid:52bb413b-3cc0-438f-b413-67e119c636e5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9\"" Jan 27 05:39:30.997218 containerd[1681]: time="2026-01-27T05:39:30.997182643Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 27 05:39:31.038000 audit[3094]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.038000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3ec25700 a2=0 a3=7ffc3ec256ec items=0 ppid=2997 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 05:39:31.039000 audit[3096]: NETFILTER_CFG table=mangle:55 family=10 entries=1 
op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.039000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6578b350 a2=0 a3=7ffd6578b33c items=0 ppid=2997 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 05:39:31.040000 audit[3097]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.040000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebba03210 a2=0 a3=7ffebba031fc items=0 ppid=2997 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 05:39:31.041000 audit[3098]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.041000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc94eee310 a2=0 a3=7ffc94eee2fc items=0 ppid=2997 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.041000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 05:39:31.041000 audit[3099]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.041000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3433af90 a2=0 a3=7ffc3433af7c items=0 ppid=2997 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 05:39:31.042000 audit[3100]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.042000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2604ed30 a2=0 a3=7ffe2604ed1c items=0 ppid=2997 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 05:39:31.145000 audit[3101]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.145000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=108 a0=3 a1=7ffc156c36b0 a2=0 a3=7ffc156c369c items=0 ppid=2997 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 05:39:31.149000 audit[3103]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.149000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe2647ee50 a2=0 a3=7ffe2647ee3c items=0 ppid=2997 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 27 05:39:31.154000 audit[3106]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.154000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd3b4365c0 a2=0 a3=7ffd3b4365ac items=0 ppid=2997 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 27 05:39:31.156000 audit[3107]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.156000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf89f4640 a2=0 a3=7ffdf89f462c items=0 ppid=2997 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 05:39:31.161000 audit[3109]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.161000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0a510ce0 a2=0 a3=7ffd0a510ccc items=0 ppid=2997 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.161000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 05:39:31.164000 audit[3110]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.164000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4d53ccb0 a2=0 a3=7ffd4d53cc9c items=0 ppid=2997 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 05:39:31.168000 audit[3112]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.168000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4994fde0 a2=0 a3=7fff4994fdcc items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 27 05:39:31.172000 audit[3115]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.172000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc3d4c63a0 a2=0 a3=7ffc3d4c638c items=0 ppid=2997 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 27 05:39:31.174000 audit[3116]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.174000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8a145670 a2=0 a3=7ffd8a14565c items=0 ppid=2997 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 05:39:31.176000 audit[3118]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.176000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe082fb680 a2=0 a3=7ffe082fb66c items=0 
ppid=2997 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 05:39:31.177000 audit[3119]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.177000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb0aa1d20 a2=0 a3=7ffcb0aa1d0c items=0 ppid=2997 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 05:39:31.180000 audit[3121]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.180000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe13a60640 a2=0 a3=7ffe13a6062c items=0 ppid=2997 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 27 05:39:31.183000 audit[3124]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.183000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffaa4147f0 a2=0 a3=7fffaa4147dc items=0 ppid=2997 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 27 05:39:31.186000 audit[3127]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.186000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff99b7d780 a2=0 a3=7fff99b7d76c items=0 ppid=2997 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.186000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 27 05:39:31.187000 audit[3128]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.187000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbf936720 a2=0 a3=7fffbf93670c items=0 ppid=2997 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 05:39:31.190000 audit[3130]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.190000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff555c3cc0 a2=0 a3=7fff555c3cac items=0 ppid=2997 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:39:31.193000 audit[3133]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.193000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeac5a5cd0 a2=0 a3=7ffeac5a5cbc items=0 ppid=2997 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:39:31.194000 audit[3134]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.194000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8dd3fe70 a2=0 a3=7ffd8dd3fe5c items=0 ppid=2997 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 05:39:31.197000 audit[3136]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:39:31.197000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe012c2030 a2=0 a3=7ffe012c201c items=0 ppid=2997 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 05:39:31.218000 audit[3142]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:31.218000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff81288980 a2=0 a3=7fff8128896c items=0 ppid=2997 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:31.227000 audit[3142]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:31.227000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff81288980 a2=0 a3=7fff8128896c items=0 ppid=2997 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:31.229000 audit[3147]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.229000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffebb19d470 a2=0 a3=7ffebb19d45c items=0 ppid=2997 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.229000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 05:39:31.232000 audit[3149]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.232000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffce63f3f10 a2=0 a3=7ffce63f3efc items=0 ppid=2997 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 27 05:39:31.235000 audit[3152]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.235000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=752 a0=3 a1=7ffd000a2c60 a2=0 a3=7ffd000a2c4c items=0 ppid=2997 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 27 05:39:31.237000 audit[3153]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.237000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffba65be90 a2=0 a3=7fffba65be7c items=0 ppid=2997 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.237000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 05:39:31.239000 audit[3155]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.239000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe82ea70b0 a2=0 a3=7ffe82ea709c items=0 ppid=2997 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 05:39:31.240000 audit[3156]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.240000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6d8dc1b0 a2=0 a3=7ffd6d8dc19c items=0 ppid=2997 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 05:39:31.242000 audit[3158]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.242000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff0d0c4d80 a2=0 a3=7fff0d0c4d6c items=0 ppid=2997 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.242000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 27 05:39:31.246000 audit[3161]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.246000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff9c5ba1a0 a2=0 a3=7fff9c5ba18c items=0 ppid=2997 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 27 05:39:31.247000 audit[3162]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.247000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2fa2e280 a2=0 a3=7ffd2fa2e26c items=0 ppid=2997 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.247000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 05:39:31.249000 audit[3164]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.249000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca382d240 a2=0 a3=7ffca382d22c items=0 ppid=2997 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.249000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 05:39:31.250000 audit[3165]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.250000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb85c3df0 a2=0 a3=7ffeb85c3ddc items=0 ppid=2997 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 05:39:31.253000 audit[3167]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.253000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde66dfea0 a2=0 a3=7ffde66dfe8c 
items=0 ppid=2997 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 27 05:39:31.256000 audit[3170]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.256000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcaa424f0 a2=0 a3=7fffcaa424dc items=0 ppid=2997 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 27 05:39:31.259000 audit[3173]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.259000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff751e8620 a2=0 a3=7fff751e860c items=0 ppid=2997 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 27 05:39:31.260000 audit[3174]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.260000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffec923abe0 a2=0 a3=7ffec923abcc items=0 ppid=2997 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 05:39:31.263000 audit[3176]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.263000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe423ccde0 a2=0 a3=7ffe423ccdcc items=0 ppid=2997 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.263000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:39:31.268000 audit[3179]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.268000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5a7d1f50 a2=0 a3=7fff5a7d1f3c items=0 ppid=2997 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:39:31.269000 audit[3180]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.269000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd598e010 a2=0 a3=7fffd598dffc items=0 ppid=2997 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 05:39:31.271000 audit[3182]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.271000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc009bdc50 a2=0 a3=7ffc009bdc3c items=0 ppid=2997 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 05:39:31.272000 audit[3183]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.272000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf2bb5fa0 a2=0 a3=7ffcf2bb5f8c items=0 ppid=2997 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 05:39:31.274000 audit[3185]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.274000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe567a7080 a2=0 a3=7ffe567a706c items=0 ppid=2997 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:39:31.278000 audit[3188]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:39:31.278000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca4743a90 a2=0 a3=7ffca4743a7c items=0 ppid=2997 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.278000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:39:31.284000 audit[3190]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 05:39:31.284000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe735dd680 a2=0 a3=7ffe735dd66c items=0 ppid=2997 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.284000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:31.285000 audit[3190]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 05:39:31.285000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe735dd680 a2=0 a3=7ffe735dd66c items=0 ppid=2997 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:31.285000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:31.568682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596831235.mount: Deactivated successfully. Jan 27 05:39:32.742551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788108631.mount: Deactivated successfully. 
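The NETFILTER_CFG / SYSCALL / PROCTITLE triplets above are auditd records of kube-proxy programming its KUBE-* chains: each iptables/ip6tables invocation (all dispatched through /usr/sbin/xtables-nft-multi) shows up as one chain or rule registration plus the netlink syscall that carried it. The proctitle field is simply the invoking command line, hex-encoded with NUL bytes between arguments, and the longer values appear to be cut off by the kernel's 128-byte audit proctitle cap, which is why several end mid-argument. A minimal decoding sketch (Python is used for illustration only; the sample value is copied from one of the records above):

    # Decode an audit PROCTITLE value back into the command line it records:
    # the value is the process argv, hex-encoded and NUL-separated.
    proctitle = ("69707461626C6573002D770035002D5700313030303030"
                 "002D4E004B5542452D5345525649434553002D74006E6174")
    args = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
    print(" ".join(args))
    # -> iptables -w 5 -W 100000 -N KUBE-SERVICES -t nat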
Jan 27 05:39:33.236519 containerd[1681]: time="2026-01-27T05:39:33.236470781Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:33.238541 containerd[1681]: time="2026-01-27T05:39:33.238510439Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 27 05:39:33.240332 containerd[1681]: time="2026-01-27T05:39:33.240292404Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:33.242528 containerd[1681]: time="2026-01-27T05:39:33.242476337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:33.243615 containerd[1681]: time="2026-01-27T05:39:33.243575518Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.246264353s" Jan 27 05:39:33.243615 containerd[1681]: time="2026-01-27T05:39:33.243605376Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 27 05:39:33.248064 containerd[1681]: time="2026-01-27T05:39:33.246666662Z" level=info msg="CreateContainer within sandbox \"ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 27 05:39:33.258489 containerd[1681]: time="2026-01-27T05:39:33.258466896Z" level=info msg="Container 178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:33.260963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627159075.mount: Deactivated successfully. Jan 27 05:39:33.268027 containerd[1681]: time="2026-01-27T05:39:33.267930140Z" level=info msg="CreateContainer within sandbox \"ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e\"" Jan 27 05:39:33.268696 containerd[1681]: time="2026-01-27T05:39:33.268669755Z" level=info msg="StartContainer for \"178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e\"" Jan 27 05:39:33.269713 containerd[1681]: time="2026-01-27T05:39:33.269584093Z" level=info msg="connecting to shim 178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e" address="unix:///run/containerd/s/49bc2908d2dcc38764c3306e1a971d8802cc87595b2240df52cd5c4d6406d03c" protocol=ttrpc version=3 Jan 27 05:39:33.290197 systemd[1]: Started cri-containerd-178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e.scope - libcontainer container 178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e. 
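The containerd records above cover one image pull end to end: ImageCreate events for the tag, the config digest and the repo digest, a "Pulled image ... in 2.246264353s" summary, then CreateContainer / StartContainer and the shim connection for tigera-operator. A small sketch of pulling the reported figures back out of such a record; the field layout (logfmt-style time/level/msg) is inferred from the log lines themselves rather than containerd documentation, and the record string below is copied from above:

    import re

    record = ('time="2026-01-27T05:39:33.238510439Z" level=info '
              'msg="stop pulling image quay.io/tigera/operator:v1.38.7: '
              'active requests=0, bytes read=23558205"')
    # Split off the quoted msg, then read the byte count it reports.
    msg = re.match(r'time="([^"]+)" level=(\w+) msg="(.*)"$', record).group(3)
    bytes_read = int(re.search(r"bytes read=(\d+)", msg).group(1))
    pull_seconds = 2.246264353  # duration from the "Pulled image ... in" record
    print(f"~{bytes_read / pull_seconds / 1e6:.1f} MB/s effective pull rate")
    # ~10.5 MB/s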
Jan 27 05:39:33.299000 audit: BPF prog-id=149 op=LOAD Jan 27 05:39:33.300000 audit: BPF prog-id=150 op=LOAD Jan 27 05:39:33.300000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3012 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:33.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137383135396237613334303835663966623334633333653530393535 Jan 27 05:39:33.300000 audit: BPF prog-id=150 op=UNLOAD Jan 27 05:39:33.300000 audit[3199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:33.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137383135396237613334303835663966623334633333653530393535 Jan 27 05:39:33.300000 audit: BPF prog-id=151 op=LOAD Jan 27 05:39:33.300000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3012 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:33.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137383135396237613334303835663966623334633333653530393535 Jan 27 05:39:33.300000 audit: BPF prog-id=152 op=LOAD Jan 27 05:39:33.300000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3012 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:33.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137383135396237613334303835663966623334633333653530393535 Jan 27 05:39:33.300000 audit: BPF prog-id=152 op=UNLOAD Jan 27 05:39:33.300000 audit[3199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:33.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137383135396237613334303835663966623334633333653530393535 Jan 27 05:39:33.300000 audit: BPF prog-id=151 op=UNLOAD Jan 27 05:39:33.300000 audit[3199]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:33.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137383135396237613334303835663966623334633333653530393535 Jan 27 05:39:33.300000 audit: BPF prog-id=153 op=LOAD Jan 27 05:39:33.300000 audit[3199]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3012 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:33.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137383135396237613334303835663966623334633333653530393535 Jan 27 05:39:33.317511 containerd[1681]: time="2026-01-27T05:39:33.317471428Z" level=info msg="StartContainer for \"178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e\" returns successfully" Jan 27 05:39:33.571477 kubelet[2895]: I0127 05:39:33.571322 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqcw2" podStartSLOduration=3.571287491 podStartE2EDuration="3.571287491s" podCreationTimestamp="2026-01-27 05:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:39:31.582916565 +0000 UTC m=+6.177415118" watchObservedRunningTime="2026-01-27 05:39:33.571287491 +0000 UTC m=+8.165786144" Jan 27 05:39:33.572407 kubelet[2895]: I0127 05:39:33.571504 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-msnr7" podStartSLOduration=1.3239344370000001 podStartE2EDuration="3.571492387s" podCreationTimestamp="2026-01-27 05:39:30 +0000 UTC" firstStartedPulling="2026-01-27 05:39:30.996770457 +0000 UTC m=+5.591268988" lastFinishedPulling="2026-01-27 05:39:33.244328402 +0000 UTC m=+7.838826938" observedRunningTime="2026-01-27 05:39:33.570313069 +0000 UTC m=+8.164811617" watchObservedRunningTime="2026-01-27 05:39:33.571492387 +0000 UTC m=+8.165991047" Jan 27 05:39:38.795196 sudo[1946]: pam_unix(sudo:session): session closed for user root Jan 27 05:39:38.799638 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 27 05:39:38.799723 kernel: audit: type=1106 audit(1769492378.795:524): pid=1946 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:38.795000 audit[1946]: USER_END pid=1946 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:38.795000 audit[1946]: CRED_DISP pid=1946 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:38.804060 kernel: audit: type=1104 audit(1769492378.795:525): pid=1946 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:38.895628 sshd[1945]: Connection closed by 4.153.228.146 port 34118 Jan 27 05:39:38.895490 sshd-session[1941]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:38.896000 audit[1941]: USER_END pid=1941 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:38.903671 systemd-logind[1655]: Session 8 logged out. Waiting for processes to exit. Jan 27 05:39:38.904259 kernel: audit: type=1106 audit(1769492378.896:526): pid=1941 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:38.904693 systemd[1]: sshd@6-10.0.7.41:22-4.153.228.146:34118.service: Deactivated successfully. Jan 27 05:39:38.907452 systemd[1]: session-8.scope: Deactivated successfully. Jan 27 05:39:38.907723 systemd[1]: session-8.scope: Consumed 3.911s CPU time, 232.6M memory peak. Jan 27 05:39:38.912364 kernel: audit: type=1104 audit(1769492378.897:527): pid=1941 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:38.897000 audit[1941]: CRED_DISP pid=1941 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:38.914453 systemd-logind[1655]: Removed session 8. Jan 27 05:39:38.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.7.41:22-4.153.228.146:34118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:38.920067 kernel: audit: type=1131 audit(1769492378.904:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.7.41:22-4.153.228.146:34118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:40.001000 audit[3284]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:40.005045 kernel: audit: type=1325 audit(1769492380.001:529): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:40.001000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdc8d80770 a2=0 a3=7ffdc8d8075c items=0 ppid=2997 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:40.010065 kernel: audit: type=1300 audit(1769492380.001:529): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdc8d80770 a2=0 a3=7ffdc8d8075c items=0 ppid=2997 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:40.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:40.014045 kernel: audit: type=1327 audit(1769492380.001:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:40.014000 audit[3284]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:40.018074 kernel: audit: type=1325 audit(1769492380.014:530): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:40.014000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc8d80770 a2=0 a3=0 items=0 ppid=2997 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:40.024050 kernel: audit: type=1300 audit(1769492380.014:530): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc8d80770 a2=0 a3=0 items=0 ppid=2997 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:40.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:40.049000 audit[3286]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:40.049000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffec3604360 a2=0 a3=7ffec360434c items=0 ppid=2997 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:40.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:40.053000 audit[3286]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3286 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:40.053000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec3604360 a2=0 a3=0 items=0 ppid=2997 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:40.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:41.652000 audit[3288]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:41.652000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd55774170 a2=0 a3=7ffd5577415c items=0 ppid=2997 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:41.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:41.655000 audit[3288]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:41.655000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd55774170 a2=0 a3=0 items=0 ppid=2997 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:41.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:42.042000 audit[3290]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:42.042000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe854b5960 a2=0 a3=7ffe854b594c items=0 ppid=2997 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:42.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:42.047000 audit[3290]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:42.047000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe854b5960 a2=0 a3=0 items=0 ppid=2997 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:42.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:43.066000 audit[3292]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:43.066000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffe90732b90 a2=0 a3=7ffe90732b7c items=0 ppid=2997 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:43.071000 audit[3292]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:43.071000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe90732b90 a2=0 a3=0 items=0 ppid=2997 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.071000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:43.220364 systemd[1]: Created slice kubepods-besteffort-pod1c428c76_532b_4fe7_86d2_00187bd435e2.slice - libcontainer container kubepods-besteffort-pod1c428c76_532b_4fe7_86d2_00187bd435e2.slice. Jan 27 05:39:43.313819 kubelet[2895]: I0127 05:39:43.313780 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c428c76-532b-4fe7-86d2-00187bd435e2-typha-certs\") pod \"calico-typha-5bc678c897-z57cs\" (UID: \"1c428c76-532b-4fe7-86d2-00187bd435e2\") " pod="calico-system/calico-typha-5bc678c897-z57cs" Jan 27 05:39:43.314386 kubelet[2895]: I0127 05:39:43.314288 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67ktr\" (UniqueName: \"kubernetes.io/projected/1c428c76-532b-4fe7-86d2-00187bd435e2-kube-api-access-67ktr\") pod \"calico-typha-5bc678c897-z57cs\" (UID: \"1c428c76-532b-4fe7-86d2-00187bd435e2\") " pod="calico-system/calico-typha-5bc678c897-z57cs" Jan 27 05:39:43.314386 kubelet[2895]: I0127 05:39:43.314361 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c428c76-532b-4fe7-86d2-00187bd435e2-tigera-ca-bundle\") pod \"calico-typha-5bc678c897-z57cs\" (UID: \"1c428c76-532b-4fe7-86d2-00187bd435e2\") " pod="calico-system/calico-typha-5bc678c897-z57cs" Jan 27 05:39:43.439599 systemd[1]: Created slice kubepods-besteffort-podf44871c1_2c70_43d9_a00e_4017cee24529.slice - libcontainer container kubepods-besteffort-podf44871c1_2c70_43d9_a00e_4017cee24529.slice. 
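The pod_startup_latency_tracker records logged at 05:39:33 above report both a podStartSLOduration and a podStartE2EDuration. For tigera-operator the two differ by roughly the firstStartedPulling-to-lastFinishedPulling window, consistent with the startup SLO metric excluding image pull time (an interpretation of the numbers, not something the log states). A quick check of that arithmetic against the timestamps in the record; the ts() helper is ad hoc:

    from datetime import datetime

    def ts(s):
        # Timestamps in the record carry nanoseconds; truncate to
        # microseconds so datetime can parse them.
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

    first_pull = ts("2026-01-27 05:39:30.996770457")  # firstStartedPulling
    last_pull = ts("2026-01-27 05:39:33.244328402")   # lastFinishedPulling
    e2e = 3.571492387                                 # podStartE2EDuration (s)
    pull = (last_pull - first_pull).total_seconds()
    print(f"pull window ~{pull:.3f}s, E2E minus pull ~{e2e - pull:.3f}s "
          f"(logged podStartSLOduration ~1.324s)")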
Jan 27 05:39:43.515758 kubelet[2895]: I0127 05:39:43.515728 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-cni-net-dir\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.515971 kubelet[2895]: I0127 05:39:43.515961 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-cni-log-dir\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516084 kubelet[2895]: I0127 05:39:43.516076 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-lib-modules\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516173 kubelet[2895]: I0127 05:39:43.516155 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-var-run-calico\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516232 kubelet[2895]: I0127 05:39:43.516224 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7gm\" (UniqueName: \"kubernetes.io/projected/f44871c1-2c70-43d9-a00e-4017cee24529-kube-api-access-2b7gm\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516343 kubelet[2895]: I0127 05:39:43.516288 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-cni-bin-dir\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516343 kubelet[2895]: I0127 05:39:43.516305 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-flexvol-driver-host\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516343 kubelet[2895]: I0127 05:39:43.516321 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-policysync\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516489 kubelet[2895]: I0127 05:39:43.516443 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f44871c1-2c70-43d9-a00e-4017cee24529-tigera-ca-bundle\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516489 kubelet[2895]: I0127 05:39:43.516465 2895 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f44871c1-2c70-43d9-a00e-4017cee24529-node-certs\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516489 kubelet[2895]: I0127 05:39:43.516478 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-xtables-lock\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.516607 kubelet[2895]: I0127 05:39:43.516599 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f44871c1-2c70-43d9-a00e-4017cee24529-var-lib-calico\") pod \"calico-node-f7dmm\" (UID: \"f44871c1-2c70-43d9-a00e-4017cee24529\") " pod="calico-system/calico-node-f7dmm" Jan 27 05:39:43.528345 containerd[1681]: time="2026-01-27T05:39:43.528314903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc678c897-z57cs,Uid:1c428c76-532b-4fe7-86d2-00187bd435e2,Namespace:calico-system,Attempt:0,}" Jan 27 05:39:43.556906 containerd[1681]: time="2026-01-27T05:39:43.556793283Z" level=info msg="connecting to shim 77227cd4f0e1a21037de75d1ce9d316b406abca8ceec36fd671c60742031d9b4" address="unix:///run/containerd/s/e903471b15ae1caafb00a6a2ac930451741b28c5a6be4f6b0d93e2106671f88f" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:39:43.590304 systemd[1]: Started cri-containerd-77227cd4f0e1a21037de75d1ce9d316b406abca8ceec36fd671c60742031d9b4.scope - libcontainer container 77227cd4f0e1a21037de75d1ce9d316b406abca8ceec36fd671c60742031d9b4. 
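The SYSCALL audit records in this section are all x86_64 (arch=c000003e), so the handful of syscall numbers they cite map to close(2), sendmsg(2) and bpf(2): the sendmsg calls are xtables-nft-multi pushing rulesets to the kernel over netlink, and the BPF prog-id LOAD/UNLOAD bursts around each container start are most likely runc setting up per-container cgroup device filters (an inference from the surrounding records, not something the log states). A tiny lookup sketch with a hard-coded subset of the x86_64 syscall table:

    # Subset of the x86_64 syscall table covering the numbers that appear in
    # the SYSCALL records above; the dict name and pairs are illustrative.
    X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

    for comm, num in (("iptables", 46), ("runc", 321), ("runc", 3)):
        print(f"comm={comm} syscall={num} -> {X86_64_SYSCALLS.get(num, '?')}")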
Jan 27 05:39:43.611000 audit: BPF prog-id=154 op=LOAD Jan 27 05:39:43.611000 audit: BPF prog-id=155 op=LOAD Jan 27 05:39:43.611000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3304 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737323237636434663065316132313033376465373564316365396433 Jan 27 05:39:43.611000 audit: BPF prog-id=155 op=UNLOAD Jan 27 05:39:43.611000 audit[3315]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737323237636434663065316132313033376465373564316365396433 Jan 27 05:39:43.611000 audit: BPF prog-id=156 op=LOAD Jan 27 05:39:43.611000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3304 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737323237636434663065316132313033376465373564316365396433 Jan 27 05:39:43.612000 audit: BPF prog-id=157 op=LOAD Jan 27 05:39:43.612000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3304 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737323237636434663065316132313033376465373564316365396433 Jan 27 05:39:43.612000 audit: BPF prog-id=157 op=UNLOAD Jan 27 05:39:43.612000 audit[3315]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737323237636434663065316132313033376465373564316365396433 Jan 27 05:39:43.612000 audit: BPF prog-id=156 op=UNLOAD Jan 27 05:39:43.612000 audit[3315]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737323237636434663065316132313033376465373564316365396433 Jan 27 05:39:43.612000 audit: BPF prog-id=158 op=LOAD Jan 27 05:39:43.612000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3304 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737323237636434663065316132313033376465373564316365396433 Jan 27 05:39:43.616409 kubelet[2895]: E0127 05:39:43.616377 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:39:43.619965 kubelet[2895]: E0127 05:39:43.619473 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.619965 kubelet[2895]: W0127 05:39:43.619492 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.619965 kubelet[2895]: E0127 05:39:43.619519 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.629177 kubelet[2895]: E0127 05:39:43.629157 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.629177 kubelet[2895]: W0127 05:39:43.629174 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.629301 kubelet[2895]: E0127 05:39:43.629280 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.637679 kubelet[2895]: E0127 05:39:43.637596 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.637679 kubelet[2895]: W0127 05:39:43.637621 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.637679 kubelet[2895]: E0127 05:39:43.637641 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.669363 containerd[1681]: time="2026-01-27T05:39:43.669185450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc678c897-z57cs,Uid:1c428c76-532b-4fe7-86d2-00187bd435e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"77227cd4f0e1a21037de75d1ce9d316b406abca8ceec36fd671c60742031d9b4\"" Jan 27 05:39:43.670624 containerd[1681]: time="2026-01-27T05:39:43.670600903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 27 05:39:43.702116 kubelet[2895]: E0127 05:39:43.701504 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.703024 kubelet[2895]: W0127 05:39:43.702237 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.703024 kubelet[2895]: E0127 05:39:43.702299 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.703528 kubelet[2895]: E0127 05:39:43.703388 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.703528 kubelet[2895]: W0127 05:39:43.703401 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.703528 kubelet[2895]: E0127 05:39:43.703413 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.704198 kubelet[2895]: E0127 05:39:43.704139 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.704198 kubelet[2895]: W0127 05:39:43.704151 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.704198 kubelet[2895]: E0127 05:39:43.704166 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.705121 kubelet[2895]: E0127 05:39:43.704916 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.705121 kubelet[2895]: W0127 05:39:43.704984 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.705121 kubelet[2895]: E0127 05:39:43.704996 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.705554 kubelet[2895]: E0127 05:39:43.705545 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.705843 kubelet[2895]: W0127 05:39:43.705678 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.705843 kubelet[2895]: E0127 05:39:43.705691 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.706281 kubelet[2895]: E0127 05:39:43.706217 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.706281 kubelet[2895]: W0127 05:39:43.706229 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.706281 kubelet[2895]: E0127 05:39:43.706238 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.707862 kubelet[2895]: E0127 05:39:43.707820 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.707862 kubelet[2895]: W0127 05:39:43.707831 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.707862 kubelet[2895]: E0127 05:39:43.707844 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.708219 kubelet[2895]: E0127 05:39:43.708183 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.708219 kubelet[2895]: W0127 05:39:43.708192 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.708219 kubelet[2895]: E0127 05:39:43.708200 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.708496 kubelet[2895]: E0127 05:39:43.708456 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.708496 kubelet[2895]: W0127 05:39:43.708464 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.708496 kubelet[2895]: E0127 05:39:43.708471 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.708759 kubelet[2895]: E0127 05:39:43.708713 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.708759 kubelet[2895]: W0127 05:39:43.708721 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.708759 kubelet[2895]: E0127 05:39:43.708728 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.708998 kubelet[2895]: E0127 05:39:43.708953 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.708998 kubelet[2895]: W0127 05:39:43.708961 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.708998 kubelet[2895]: E0127 05:39:43.708969 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.709329 kubelet[2895]: E0127 05:39:43.709278 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.709329 kubelet[2895]: W0127 05:39:43.709286 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.709329 kubelet[2895]: E0127 05:39:43.709293 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.709774 kubelet[2895]: E0127 05:39:43.709763 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.710049 kubelet[2895]: W0127 05:39:43.709997 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.710049 kubelet[2895]: E0127 05:39:43.710012 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.710685 kubelet[2895]: E0127 05:39:43.710655 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.710685 kubelet[2895]: W0127 05:39:43.710672 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.710841 kubelet[2895]: E0127 05:39:43.710756 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.711045 kubelet[2895]: E0127 05:39:43.711022 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.711446 kubelet[2895]: W0127 05:39:43.711030 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.711446 kubelet[2895]: E0127 05:39:43.711416 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.712001 kubelet[2895]: E0127 05:39:43.711796 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.712001 kubelet[2895]: W0127 05:39:43.711805 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.712001 kubelet[2895]: E0127 05:39:43.711814 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.712560 kubelet[2895]: E0127 05:39:43.712541 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.712696 kubelet[2895]: W0127 05:39:43.712605 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.712696 kubelet[2895]: E0127 05:39:43.712617 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.713047 kubelet[2895]: E0127 05:39:43.712886 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.713047 kubelet[2895]: W0127 05:39:43.712993 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.713047 kubelet[2895]: E0127 05:39:43.713003 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.713466 kubelet[2895]: E0127 05:39:43.713250 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.713466 kubelet[2895]: W0127 05:39:43.713258 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.713466 kubelet[2895]: E0127 05:39:43.713266 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.713850 kubelet[2895]: E0127 05:39:43.713685 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.713850 kubelet[2895]: W0127 05:39:43.713706 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.713850 kubelet[2895]: E0127 05:39:43.713714 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.718070 kubelet[2895]: E0127 05:39:43.718058 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.718222 kubelet[2895]: W0127 05:39:43.718141 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.718222 kubelet[2895]: E0127 05:39:43.718155 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.718222 kubelet[2895]: I0127 05:39:43.718182 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ea10135-90f4-4815-b58a-eefd271d18ce-varrun\") pod \"csi-node-driver-mf6bj\" (UID: \"7ea10135-90f4-4815-b58a-eefd271d18ce\") " pod="calico-system/csi-node-driver-mf6bj" Jan 27 05:39:43.718553 kubelet[2895]: E0127 05:39:43.718478 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.718553 kubelet[2895]: W0127 05:39:43.718489 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.718553 kubelet[2895]: E0127 05:39:43.718501 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.718553 kubelet[2895]: I0127 05:39:43.718519 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ea10135-90f4-4815-b58a-eefd271d18ce-socket-dir\") pod \"csi-node-driver-mf6bj\" (UID: \"7ea10135-90f4-4815-b58a-eefd271d18ce\") " pod="calico-system/csi-node-driver-mf6bj" Jan 27 05:39:43.718832 kubelet[2895]: E0127 05:39:43.718760 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.718832 kubelet[2895]: W0127 05:39:43.718768 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.718832 kubelet[2895]: E0127 05:39:43.718782 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.718832 kubelet[2895]: I0127 05:39:43.718795 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ea10135-90f4-4815-b58a-eefd271d18ce-registration-dir\") pod \"csi-node-driver-mf6bj\" (UID: \"7ea10135-90f4-4815-b58a-eefd271d18ce\") " pod="calico-system/csi-node-driver-mf6bj" Jan 27 05:39:43.719063 kubelet[2895]: E0127 05:39:43.719025 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.719104 kubelet[2895]: W0127 05:39:43.719097 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.719195 kubelet[2895]: E0127 05:39:43.719148 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.719195 kubelet[2895]: I0127 05:39:43.719162 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ea10135-90f4-4815-b58a-eefd271d18ce-kubelet-dir\") pod \"csi-node-driver-mf6bj\" (UID: \"7ea10135-90f4-4815-b58a-eefd271d18ce\") " pod="calico-system/csi-node-driver-mf6bj" Jan 27 05:39:43.719363 kubelet[2895]: E0127 05:39:43.719346 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.719363 kubelet[2895]: W0127 05:39:43.719354 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.719472 kubelet[2895]: E0127 05:39:43.719420 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.719472 kubelet[2895]: I0127 05:39:43.719434 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nm94\" (UniqueName: \"kubernetes.io/projected/7ea10135-90f4-4815-b58a-eefd271d18ce-kube-api-access-5nm94\") pod \"csi-node-driver-mf6bj\" (UID: \"7ea10135-90f4-4815-b58a-eefd271d18ce\") " pod="calico-system/csi-node-driver-mf6bj" Jan 27 05:39:43.719641 kubelet[2895]: E0127 05:39:43.719634 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.719677 kubelet[2895]: W0127 05:39:43.719671 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.719779 kubelet[2895]: E0127 05:39:43.719770 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.719868 kubelet[2895]: E0127 05:39:43.719835 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.719868 kubelet[2895]: W0127 05:39:43.719841 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.719924 kubelet[2895]: E0127 05:39:43.719862 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.720094 kubelet[2895]: E0127 05:39:43.720059 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.720094 kubelet[2895]: W0127 05:39:43.720066 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.720094 kubelet[2895]: E0127 05:39:43.720080 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.720573 kubelet[2895]: E0127 05:39:43.720287 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.720573 kubelet[2895]: W0127 05:39:43.720528 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.720573 kubelet[2895]: E0127 05:39:43.720559 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.720827 kubelet[2895]: E0127 05:39:43.720786 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.720827 kubelet[2895]: W0127 05:39:43.720794 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.720964 kubelet[2895]: E0127 05:39:43.720815 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.721263 kubelet[2895]: E0127 05:39:43.721212 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.721477 kubelet[2895]: W0127 05:39:43.721405 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.721477 kubelet[2895]: E0127 05:39:43.721419 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.721692 kubelet[2895]: E0127 05:39:43.721680 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.721813 kubelet[2895]: W0127 05:39:43.721762 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.721813 kubelet[2895]: E0127 05:39:43.721773 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.722061 kubelet[2895]: E0127 05:39:43.721988 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.722061 kubelet[2895]: W0127 05:39:43.721995 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.722061 kubelet[2895]: E0127 05:39:43.722004 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.722253 kubelet[2895]: E0127 05:39:43.722238 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.722293 kubelet[2895]: W0127 05:39:43.722287 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.722348 kubelet[2895]: E0127 05:39:43.722342 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.722543 kubelet[2895]: E0127 05:39:43.722511 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.722543 kubelet[2895]: W0127 05:39:43.722525 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.722543 kubelet[2895]: E0127 05:39:43.722532 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.744266 containerd[1681]: time="2026-01-27T05:39:43.744228742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f7dmm,Uid:f44871c1-2c70-43d9-a00e-4017cee24529,Namespace:calico-system,Attempt:0,}" Jan 27 05:39:43.770384 containerd[1681]: time="2026-01-27T05:39:43.770319340Z" level=info msg="connecting to shim 32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9" address="unix:///run/containerd/s/da21c4e50fd1d33ed3da8c70dee88938029449637702cdf66ecf578249be2273" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:39:43.797256 systemd[1]: Started cri-containerd-32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9.scope - libcontainer container 32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9. Jan 27 05:39:43.806000 audit: BPF prog-id=159 op=LOAD Jan 27 05:39:43.809224 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 27 05:39:43.809262 kernel: audit: type=1334 audit(1769492383.806:547): prog-id=159 op=LOAD Jan 27 05:39:43.807000 audit: BPF prog-id=160 op=LOAD Jan 27 05:39:43.811388 kernel: audit: type=1334 audit(1769492383.807:548): prog-id=160 op=LOAD Jan 27 05:39:43.807000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.814026 kernel: audit: type=1300 audit(1769492383.807:548): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.818840 kernel: audit: type=1327 audit(1769492383.807:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.820736 kubelet[2895]: E0127 05:39:43.820602 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.820736 kubelet[2895]: W0127 05:39:43.820624 2895 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.820736 kubelet[2895]: E0127 05:39:43.820644 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.821005 kubelet[2895]: E0127 05:39:43.820996 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.821079 kubelet[2895]: W0127 05:39:43.821061 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.807000 audit: BPF prog-id=160 op=UNLOAD Jan 27 05:39:43.821390 kubelet[2895]: E0127 05:39:43.821253 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.821648 kubelet[2895]: E0127 05:39:43.821635 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.831136 kernel: audit: type=1334 audit(1769492383.807:549): prog-id=160 op=UNLOAD Jan 27 05:39:43.831392 kernel: audit: type=1300 audit(1769492383.807:549): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit[3406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.835349 kubelet[2895]: W0127 05:39:43.821676 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.835349 kubelet[2895]: E0127 05:39:43.821687 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.835349 kubelet[2895]: E0127 05:39:43.833965 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.835349 kubelet[2895]: W0127 05:39:43.834013 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.835871 kubelet[2895]: E0127 05:39:43.835851 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.836736 kubelet[2895]: E0127 05:39:43.836488 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.836736 kubelet[2895]: W0127 05:39:43.836666 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.836736 kubelet[2895]: E0127 05:39:43.836683 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.837123 kubelet[2895]: E0127 05:39:43.837081 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.837317 kubelet[2895]: W0127 05:39:43.837170 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.837317 kubelet[2895]: E0127 05:39:43.837183 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.837515 kubelet[2895]: E0127 05:39:43.837496 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.837568 kubelet[2895]: W0127 05:39:43.837561 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.837619 kubelet[2895]: E0127 05:39:43.837613 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.837919 kubelet[2895]: E0127 05:39:43.837902 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.837919 kubelet[2895]: W0127 05:39:43.837910 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.838050 kubelet[2895]: E0127 05:39:43.837974 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.838200 kubelet[2895]: E0127 05:39:43.838180 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.838200 kubelet[2895]: W0127 05:39:43.838191 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.838333 kubelet[2895]: E0127 05:39:43.838241 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.838469 kubelet[2895]: E0127 05:39:43.838455 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.838469 kubelet[2895]: W0127 05:39:43.838462 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.838608 kubelet[2895]: E0127 05:39:43.838600 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.838726 kubelet[2895]: E0127 05:39:43.838711 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.838726 kubelet[2895]: W0127 05:39:43.838718 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.838854 kubelet[2895]: E0127 05:39:43.838815 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.839006 kubelet[2895]: E0127 05:39:43.838972 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.839006 kubelet[2895]: W0127 05:39:43.838978 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.839241 kubelet[2895]: E0127 05:39:43.839184 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.839310 kubelet[2895]: E0127 05:39:43.839305 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.839405 kubelet[2895]: W0127 05:39:43.839352 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.839517 kubelet[2895]: E0127 05:39:43.839488 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.839575 kubelet[2895]: E0127 05:39:43.839570 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.839622 kubelet[2895]: W0127 05:39:43.839598 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.839711 kubelet[2895]: E0127 05:39:43.839652 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.839893 kubelet[2895]: E0127 05:39:43.839838 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.839893 kubelet[2895]: W0127 05:39:43.839846 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.839893 kubelet[2895]: E0127 05:39:43.839854 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.840153 kubelet[2895]: E0127 05:39:43.840146 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.840272 kubelet[2895]: W0127 05:39:43.840192 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.840272 kubelet[2895]: E0127 05:39:43.840200 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.841413 kernel: audit: type=1327 audit(1769492383.807:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.841673 kubelet[2895]: E0127 05:39:43.841632 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.841673 kubelet[2895]: W0127 05:39:43.841640 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.841913 kubelet[2895]: E0127 05:39:43.841871 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.841913 kubelet[2895]: W0127 05:39:43.841878 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.842110 kubelet[2895]: E0127 05:39:43.842080 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.842110 kubelet[2895]: W0127 05:39:43.842086 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.807000 audit: BPF prog-id=161 op=LOAD Jan 27 05:39:43.842400 kubelet[2895]: E0127 05:39:43.842347 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.842400 kubelet[2895]: W0127 05:39:43.842354 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.842400 kubelet[2895]: E0127 05:39:43.842362 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.842400 kubelet[2895]: E0127 05:39:43.842389 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.842555 kubelet[2895]: E0127 05:39:43.842505 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.842715 kubelet[2895]: E0127 05:39:43.842659 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.842715 kubelet[2895]: W0127 05:39:43.842666 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.842715 kubelet[2895]: E0127 05:39:43.842673 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.842954 kubelet[2895]: E0127 05:39:43.842896 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.842954 kubelet[2895]: W0127 05:39:43.842904 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.842954 kubelet[2895]: E0127 05:39:43.842910 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.843270 kubelet[2895]: E0127 05:39:43.843196 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.843270 kubelet[2895]: W0127 05:39:43.843204 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.843270 kubelet[2895]: E0127 05:39:43.843210 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.843270 kubelet[2895]: E0127 05:39:43.843224 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.843506 kubelet[2895]: E0127 05:39:43.843447 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.843506 kubelet[2895]: W0127 05:39:43.843454 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.843506 kubelet[2895]: E0127 05:39:43.843460 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:43.843755 kubelet[2895]: E0127 05:39:43.843700 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.843755 kubelet[2895]: W0127 05:39:43.843708 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.843755 kubelet[2895]: E0127 05:39:43.843714 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:43.844304 kernel: audit: type=1334 audit(1769492383.807:550): prog-id=161 op=LOAD Jan 27 05:39:43.844417 kernel: audit: type=1300 audit(1769492383.807:550): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.855045 containerd[1681]: time="2026-01-27T05:39:43.854890658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f7dmm,Uid:f44871c1-2c70-43d9-a00e-4017cee24529,Namespace:calico-system,Attempt:0,} returns sandbox id \"32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9\"" Jan 27 05:39:43.807000 audit: BPF prog-id=162 op=LOAD Jan 27 05:39:43.807000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.807000 audit: BPF prog-id=162 op=UNLOAD Jan 27 05:39:43.807000 audit[3406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.807000 audit: BPF prog-id=161 op=UNLOAD Jan 27 05:39:43.807000 
audit[3406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.807000 audit: BPF prog-id=163 op=LOAD Jan 27 05:39:43.807000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3394 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:43.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.857184 kernel: audit: type=1327 audit(1769492383.807:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653262393132363530303935643466666163633438616232363062 Jan 27 05:39:43.858750 kubelet[2895]: E0127 05:39:43.858687 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:43.858750 kubelet[2895]: W0127 05:39:43.858707 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:43.858750 kubelet[2895]: E0127 05:39:43.858725 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:44.082000 audit[3457]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:44.082000 audit[3457]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcaee39ba0 a2=0 a3=7ffcaee39b8c items=0 ppid=2997 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:44.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:44.086000 audit[3457]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:39:44.086000 audit[3457]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcaee39ba0 a2=0 a3=0 items=0 ppid=2997 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:44.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:39:45.018844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674382014.mount: Deactivated successfully. Jan 27 05:39:45.503291 containerd[1681]: time="2026-01-27T05:39:45.503172626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:45.504862 containerd[1681]: time="2026-01-27T05:39:45.504706779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 27 05:39:45.506372 kubelet[2895]: E0127 05:39:45.505908 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:39:45.507496 containerd[1681]: time="2026-01-27T05:39:45.507409297Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:45.513055 containerd[1681]: time="2026-01-27T05:39:45.513000771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:45.513984 containerd[1681]: time="2026-01-27T05:39:45.513959429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.843333789s" Jan 27 05:39:45.513984 containerd[1681]: time="2026-01-27T05:39:45.513983743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" 
Jan 27 05:39:45.516538 containerd[1681]: time="2026-01-27T05:39:45.516455652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 27 05:39:45.532113 containerd[1681]: time="2026-01-27T05:39:45.532068924Z" level=info msg="CreateContainer within sandbox \"77227cd4f0e1a21037de75d1ce9d316b406abca8ceec36fd671c60742031d9b4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 27 05:39:45.547304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844537488.mount: Deactivated successfully. Jan 27 05:39:45.549809 containerd[1681]: time="2026-01-27T05:39:45.548986602Z" level=info msg="Container 497aa32ba58419da0b5ad52bd4816361b58053ad05b5f0a2d6b53a38b7197776: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:45.558908 containerd[1681]: time="2026-01-27T05:39:45.558879554Z" level=info msg="CreateContainer within sandbox \"77227cd4f0e1a21037de75d1ce9d316b406abca8ceec36fd671c60742031d9b4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"497aa32ba58419da0b5ad52bd4816361b58053ad05b5f0a2d6b53a38b7197776\"" Jan 27 05:39:45.560111 containerd[1681]: time="2026-01-27T05:39:45.560092225Z" level=info msg="StartContainer for \"497aa32ba58419da0b5ad52bd4816361b58053ad05b5f0a2d6b53a38b7197776\"" Jan 27 05:39:45.561288 containerd[1681]: time="2026-01-27T05:39:45.561256629Z" level=info msg="connecting to shim 497aa32ba58419da0b5ad52bd4816361b58053ad05b5f0a2d6b53a38b7197776" address="unix:///run/containerd/s/e903471b15ae1caafb00a6a2ac930451741b28c5a6be4f6b0d93e2106671f88f" protocol=ttrpc version=3 Jan 27 05:39:45.588283 systemd[1]: Started cri-containerd-497aa32ba58419da0b5ad52bd4816361b58053ad05b5f0a2d6b53a38b7197776.scope - libcontainer container 497aa32ba58419da0b5ad52bd4816361b58053ad05b5f0a2d6b53a38b7197776. 
Jan 27 05:39:45.599000 audit: BPF prog-id=164 op=LOAD Jan 27 05:39:45.600000 audit: BPF prog-id=165 op=LOAD Jan 27 05:39:45.600000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3304 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:45.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439376161333262613538343139646130623561643532626434383136 Jan 27 05:39:45.600000 audit: BPF prog-id=165 op=UNLOAD Jan 27 05:39:45.600000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:45.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439376161333262613538343139646130623561643532626434383136 Jan 27 05:39:45.600000 audit: BPF prog-id=166 op=LOAD Jan 27 05:39:45.600000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3304 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:45.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439376161333262613538343139646130623561643532626434383136 Jan 27 05:39:45.600000 audit: BPF prog-id=167 op=LOAD Jan 27 05:39:45.600000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3304 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:45.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439376161333262613538343139646130623561643532626434383136 Jan 27 05:39:45.600000 audit: BPF prog-id=167 op=UNLOAD Jan 27 05:39:45.600000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:45.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439376161333262613538343139646130623561643532626434383136 Jan 27 05:39:45.600000 audit: BPF prog-id=166 op=UNLOAD Jan 27 05:39:45.600000 audit[3468]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:45.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439376161333262613538343139646130623561643532626434383136 Jan 27 05:39:45.600000 audit: BPF prog-id=168 op=LOAD Jan 27 05:39:45.600000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3304 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:45.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439376161333262613538343139646130623561643532626434383136 Jan 27 05:39:45.647042 containerd[1681]: time="2026-01-27T05:39:45.646985090Z" level=info msg="StartContainer for \"497aa32ba58419da0b5ad52bd4816361b58053ad05b5f0a2d6b53a38b7197776\" returns successfully" Jan 27 05:39:46.609598 kubelet[2895]: I0127 05:39:46.609374 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bc678c897-z57cs" podStartSLOduration=1.765087545 podStartE2EDuration="3.609296939s" podCreationTimestamp="2026-01-27 05:39:43 +0000 UTC" firstStartedPulling="2026-01-27 05:39:43.670371922 +0000 UTC m=+18.264870453" lastFinishedPulling="2026-01-27 05:39:45.514581316 +0000 UTC m=+20.109079847" observedRunningTime="2026-01-27 05:39:46.608510653 +0000 UTC m=+21.203009258" watchObservedRunningTime="2026-01-27 05:39:46.609296939 +0000 UTC m=+21.203795504" Jan 27 05:39:46.632183 kubelet[2895]: E0127 05:39:46.632133 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.632183 kubelet[2895]: W0127 05:39:46.632155 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.632183 kubelet[2895]: E0127 05:39:46.632176 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.632408 kubelet[2895]: E0127 05:39:46.632340 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.632408 kubelet[2895]: W0127 05:39:46.632347 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.632408 kubelet[2895]: E0127 05:39:46.632356 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:46.632505 kubelet[2895]: E0127 05:39:46.632495 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.632527 kubelet[2895]: W0127 05:39:46.632506 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.632527 kubelet[2895]: E0127 05:39:46.632514 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.632706 kubelet[2895]: E0127 05:39:46.632696 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.632737 kubelet[2895]: W0127 05:39:46.632706 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.632737 kubelet[2895]: E0127 05:39:46.632714 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.632874 kubelet[2895]: E0127 05:39:46.632866 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.632896 kubelet[2895]: W0127 05:39:46.632874 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.632896 kubelet[2895]: E0127 05:39:46.632882 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.633010 kubelet[2895]: E0127 05:39:46.633001 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.633100 kubelet[2895]: W0127 05:39:46.633010 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.633100 kubelet[2895]: E0127 05:39:46.633017 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.633174 kubelet[2895]: E0127 05:39:46.633161 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.633174 kubelet[2895]: W0127 05:39:46.633170 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.633232 kubelet[2895]: E0127 05:39:46.633177 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:46.633312 kubelet[2895]: E0127 05:39:46.633302 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.633339 kubelet[2895]: W0127 05:39:46.633312 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.633339 kubelet[2895]: E0127 05:39:46.633319 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.633456 kubelet[2895]: E0127 05:39:46.633445 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.633456 kubelet[2895]: W0127 05:39:46.633453 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.633511 kubelet[2895]: E0127 05:39:46.633460 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.633594 kubelet[2895]: E0127 05:39:46.633585 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.633621 kubelet[2895]: W0127 05:39:46.633593 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.633621 kubelet[2895]: E0127 05:39:46.633600 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.633737 kubelet[2895]: E0127 05:39:46.633728 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.633762 kubelet[2895]: W0127 05:39:46.633737 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.633762 kubelet[2895]: E0127 05:39:46.633744 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.633872 kubelet[2895]: E0127 05:39:46.633864 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.633899 kubelet[2895]: W0127 05:39:46.633872 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.633899 kubelet[2895]: E0127 05:39:46.633879 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:46.634010 kubelet[2895]: E0127 05:39:46.634000 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.634010 kubelet[2895]: W0127 05:39:46.634008 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.634073 kubelet[2895]: E0127 05:39:46.634015 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.634154 kubelet[2895]: E0127 05:39:46.634145 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.634182 kubelet[2895]: W0127 05:39:46.634153 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.634182 kubelet[2895]: E0127 05:39:46.634161 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.634291 kubelet[2895]: E0127 05:39:46.634276 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.634291 kubelet[2895]: W0127 05:39:46.634284 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.634333 kubelet[2895]: E0127 05:39:46.634291 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.666947 kubelet[2895]: E0127 05:39:46.666900 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.667415 kubelet[2895]: W0127 05:39:46.667213 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.667415 kubelet[2895]: E0127 05:39:46.667252 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.667655 kubelet[2895]: E0127 05:39:46.667628 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.667655 kubelet[2895]: W0127 05:39:46.667640 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.667793 kubelet[2895]: E0127 05:39:46.667742 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:46.668334 kubelet[2895]: E0127 05:39:46.668213 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.668334 kubelet[2895]: W0127 05:39:46.668227 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.668334 kubelet[2895]: E0127 05:39:46.668246 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.668687 kubelet[2895]: E0127 05:39:46.668612 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.668687 kubelet[2895]: W0127 05:39:46.668623 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.668687 kubelet[2895]: E0127 05:39:46.668639 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.669131 kubelet[2895]: E0127 05:39:46.669069 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.669131 kubelet[2895]: W0127 05:39:46.669080 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.669131 kubelet[2895]: E0127 05:39:46.669112 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.669471 kubelet[2895]: E0127 05:39:46.669419 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.669471 kubelet[2895]: W0127 05:39:46.669429 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.669471 kubelet[2895]: E0127 05:39:46.669452 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.670795 kubelet[2895]: E0127 05:39:46.670665 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.670795 kubelet[2895]: W0127 05:39:46.670700 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.670795 kubelet[2895]: E0127 05:39:46.670736 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:46.671335 kubelet[2895]: E0127 05:39:46.671230 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.671335 kubelet[2895]: W0127 05:39:46.671242 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.671335 kubelet[2895]: E0127 05:39:46.671266 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.671569 kubelet[2895]: E0127 05:39:46.671514 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.671569 kubelet[2895]: W0127 05:39:46.671525 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.671569 kubelet[2895]: E0127 05:39:46.671547 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.671921 kubelet[2895]: E0127 05:39:46.671840 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.671921 kubelet[2895]: W0127 05:39:46.671850 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.671921 kubelet[2895]: E0127 05:39:46.671869 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.672210 kubelet[2895]: E0127 05:39:46.672201 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.672304 kubelet[2895]: W0127 05:39:46.672256 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.672304 kubelet[2895]: E0127 05:39:46.672275 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.672591 kubelet[2895]: E0127 05:39:46.672548 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.672591 kubelet[2895]: W0127 05:39:46.672568 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.672665 kubelet[2895]: E0127 05:39:46.672600 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:46.673667 kubelet[2895]: E0127 05:39:46.673651 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.673667 kubelet[2895]: W0127 05:39:46.673663 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.673806 kubelet[2895]: E0127 05:39:46.673770 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.673875 kubelet[2895]: E0127 05:39:46.673817 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.673875 kubelet[2895]: W0127 05:39:46.673823 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.673875 kubelet[2895]: E0127 05:39:46.673837 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.673967 kubelet[2895]: E0127 05:39:46.673959 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.673967 kubelet[2895]: W0127 05:39:46.673966 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.674008 kubelet[2895]: E0127 05:39:46.673978 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.674291 kubelet[2895]: E0127 05:39:46.674270 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.674291 kubelet[2895]: W0127 05:39:46.674280 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.674419 kubelet[2895]: E0127 05:39:46.674362 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.674889 kubelet[2895]: E0127 05:39:46.674796 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.674889 kubelet[2895]: W0127 05:39:46.674804 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.674889 kubelet[2895]: E0127 05:39:46.674818 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:39:46.675459 kubelet[2895]: E0127 05:39:46.675446 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:39:46.675539 kubelet[2895]: W0127 05:39:46.675512 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:39:46.675539 kubelet[2895]: E0127 05:39:46.675523 2895 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:39:46.817055 containerd[1681]: time="2026-01-27T05:39:46.816696708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:46.818790 containerd[1681]: time="2026-01-27T05:39:46.818767163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 27 05:39:46.820080 containerd[1681]: time="2026-01-27T05:39:46.820062681Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:46.822987 containerd[1681]: time="2026-01-27T05:39:46.822734372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:46.823218 containerd[1681]: time="2026-01-27T05:39:46.822952121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.306243523s" Jan 27 05:39:46.823218 containerd[1681]: time="2026-01-27T05:39:46.823140638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 27 05:39:46.826655 containerd[1681]: time="2026-01-27T05:39:46.826635298Z" level=info msg="CreateContainer within sandbox \"32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 27 05:39:46.841581 containerd[1681]: time="2026-01-27T05:39:46.840712243Z" level=info msg="Container 1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:46.849116 containerd[1681]: time="2026-01-27T05:39:46.849080689Z" level=info msg="CreateContainer within sandbox \"32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52\"" Jan 27 05:39:46.849697 containerd[1681]: time="2026-01-27T05:39:46.849676804Z" level=info msg="StartContainer for \"1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52\"" Jan 27 05:39:46.851010 containerd[1681]: time="2026-01-27T05:39:46.850987450Z" level=info msg="connecting to shim 
1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52" address="unix:///run/containerd/s/da21c4e50fd1d33ed3da8c70dee88938029449637702cdf66ecf578249be2273" protocol=ttrpc version=3 Jan 27 05:39:46.874293 systemd[1]: Started cri-containerd-1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52.scope - libcontainer container 1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52. Jan 27 05:39:46.922000 audit: BPF prog-id=169 op=LOAD Jan 27 05:39:46.922000 audit[3542]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3394 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:46.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323831383537393264653766363136323533383061633337353335 Jan 27 05:39:46.922000 audit: BPF prog-id=170 op=LOAD Jan 27 05:39:46.922000 audit[3542]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3394 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:46.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323831383537393264653766363136323533383061633337353335 Jan 27 05:39:46.922000 audit: BPF prog-id=170 op=UNLOAD Jan 27 05:39:46.922000 audit[3542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:46.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323831383537393264653766363136323533383061633337353335 Jan 27 05:39:46.922000 audit: BPF prog-id=169 op=UNLOAD Jan 27 05:39:46.922000 audit[3542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:46.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323831383537393264653766363136323533383061633337353335 Jan 27 05:39:46.922000 audit: BPF prog-id=171 op=LOAD Jan 27 05:39:46.922000 audit[3542]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3394 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:46.922000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136323831383537393264653766363136323533383061633337353335 Jan 27 05:39:46.946342 containerd[1681]: time="2026-01-27T05:39:46.946219088Z" level=info msg="StartContainer for \"1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52\" returns successfully" Jan 27 05:39:46.957692 systemd[1]: cri-containerd-1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52.scope: Deactivated successfully. Jan 27 05:39:46.961588 containerd[1681]: time="2026-01-27T05:39:46.961553945Z" level=info msg="received container exit event container_id:\"1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52\" id:\"1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52\" pid:3555 exited_at:{seconds:1769492386 nanos:960216252}" Jan 27 05:39:46.961000 audit: BPF prog-id=171 op=UNLOAD Jan 27 05:39:46.982428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1628185792de7f61625380ac3753591f15a15396f6ccb0a88a57fdc49e2f3b52-rootfs.mount: Deactivated successfully. Jan 27 05:39:47.505244 kubelet[2895]: E0127 05:39:47.505200 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:39:47.598786 kubelet[2895]: I0127 05:39:47.598760 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 05:39:49.505294 kubelet[2895]: E0127 05:39:49.505235 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:39:49.605081 containerd[1681]: time="2026-01-27T05:39:49.604685577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 27 05:39:51.504541 kubelet[2895]: E0127 05:39:51.504506 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:39:52.156089 containerd[1681]: time="2026-01-27T05:39:52.155999380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:52.158602 containerd[1681]: time="2026-01-27T05:39:52.158552324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 27 05:39:52.159851 containerd[1681]: time="2026-01-27T05:39:52.159819503Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:52.162430 containerd[1681]: time="2026-01-27T05:39:52.162390174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:39:52.163232 containerd[1681]: time="2026-01-27T05:39:52.163209252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.558405049s" Jan 27 05:39:52.163272 containerd[1681]: time="2026-01-27T05:39:52.163231586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 27 05:39:52.164995 containerd[1681]: time="2026-01-27T05:39:52.164976193Z" level=info msg="CreateContainer within sandbox \"32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 27 05:39:52.178651 containerd[1681]: time="2026-01-27T05:39:52.175294234Z" level=info msg="Container 0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:39:52.190340 containerd[1681]: time="2026-01-27T05:39:52.190296685Z" level=info msg="CreateContainer within sandbox \"32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f\"" Jan 27 05:39:52.190811 containerd[1681]: time="2026-01-27T05:39:52.190793389Z" level=info msg="StartContainer for \"0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f\"" Jan 27 05:39:52.193152 containerd[1681]: time="2026-01-27T05:39:52.193113736Z" level=info msg="connecting to shim 0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f" address="unix:///run/containerd/s/da21c4e50fd1d33ed3da8c70dee88938029449637702cdf66ecf578249be2273" protocol=ttrpc version=3 Jan 27 05:39:52.217214 systemd[1]: Started cri-containerd-0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f.scope - libcontainer container 0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f. 
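Aside on the repeated driver-call.go warnings above: the kubelet's FlexVolume prober executes each plugin binary with the single argument init and expects a JSON status object on stdout; because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds has not been installed yet (the flexvol-driver container created from the pod2daemon-flexvol image above is presumably what ships it), the call sees empty output and the JSON decode fails with "unexpected end of JSON input". A minimal sketch of that exchange, assuming the standard FlexVolume contract; the probe_flexvolume helper below is illustrative, not kubelet source:

# Sketch of the FlexVolume "init" handshake the kubelet warnings above refer to.
# A conforming driver answers on stdout with JSON such as:
#   {"status": "Success", "capabilities": {"attach": false}}
# A missing or silent driver yields empty output, hence the unmarshal errors.
import json
import subprocess

def probe_flexvolume(driver_path: str) -> dict:
    """Illustrative helper: run `<driver> init` and parse its JSON reply."""
    result = subprocess.run([driver_path, "init"], capture_output=True, text=True)
    return json.loads(result.stdout)  # empty stdout raises here, mirroring the log above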
Jan 27 05:39:52.261000 audit: BPF prog-id=172 op=LOAD Jan 27 05:39:52.263453 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 27 05:39:52.263526 kernel: audit: type=1334 audit(1769492392.261:571): prog-id=172 op=LOAD Jan 27 05:39:52.261000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.271281 kernel: audit: type=1300 audit(1769492392.261:571): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.271400 kernel: audit: type=1327 audit(1769492392.261:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.263000 audit: BPF prog-id=173 op=LOAD Jan 27 05:39:52.274574 kernel: audit: type=1334 audit(1769492392.263:572): prog-id=173 op=LOAD Jan 27 05:39:52.263000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.277226 kernel: audit: type=1300 audit(1769492392.263:572): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.281341 kernel: audit: type=1327 audit(1769492392.263:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.263000 audit: BPF prog-id=173 op=UNLOAD Jan 27 05:39:52.289804 kernel: audit: type=1334 audit(1769492392.263:573): prog-id=173 op=UNLOAD Jan 27 05:39:52.289891 kernel: audit: type=1300 audit(1769492392.263:573): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.263000 
audit[3600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.293494 kernel: audit: type=1327 audit(1769492392.263:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.263000 audit: BPF prog-id=172 op=UNLOAD Jan 27 05:39:52.295156 kernel: audit: type=1334 audit(1769492392.263:574): prog-id=172 op=UNLOAD Jan 27 05:39:52.263000 audit[3600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.263000 audit: BPF prog-id=174 op=LOAD Jan 27 05:39:52.263000 audit[3600]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3394 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:52.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065636666343465353261393039356164323533306431363630303337 Jan 27 05:39:52.305941 containerd[1681]: time="2026-01-27T05:39:52.305910376Z" level=info msg="StartContainer for \"0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f\" returns successfully" Jan 27 05:39:53.504678 kubelet[2895]: E0127 05:39:53.504645 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:39:53.611137 containerd[1681]: time="2026-01-27T05:39:53.611084645Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 27 05:39:53.613496 systemd[1]: cri-containerd-0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f.scope: Deactivated successfully. 
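Aside on the audit records above: each PROCTITLE value is the hex encoding of the audited process's command line, with NUL bytes separating the arguments; here it is runc being invoked by containerd for the k8s.io namespace, with the trailing container ID truncated by auditd. A minimal decoding sketch (Python, using a shortened prefix of the hex values logged above; not part of the captured log):

# Decode an auditd PROCTITLE field: hex-encoded argv with NUL separators.
# The excerpt below is a prefix of the proctitle values in the records above.
hexval = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
          "2F6B38732E696F002D2D6C6F67")
args = [a.decode() for a in bytes.fromhex(hexval).split(b"\x00")]
print(args)  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']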
Jan 27 05:39:53.614104 systemd[1]: cri-containerd-0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f.scope: Consumed 444ms CPU time, 193.1M memory peak, 171.3M written to disk. Jan 27 05:39:53.616407 containerd[1681]: time="2026-01-27T05:39:53.616303152Z" level=info msg="received container exit event container_id:\"0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f\" id:\"0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f\" pid:3614 exited_at:{seconds:1769492393 nanos:616010094}" Jan 27 05:39:53.617000 audit: BPF prog-id=174 op=UNLOAD Jan 27 05:39:53.638633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ecff44e52a9095ad2530d16600375fc97f9977b19ceb19a2687d49ca436813f-rootfs.mount: Deactivated successfully. Jan 27 05:39:53.697334 kubelet[2895]: I0127 05:39:53.697269 2895 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 27 05:39:53.738972 systemd[1]: Created slice kubepods-burstable-pod0babb8ec_614b_4d27_a5c4_cb80e7016a50.slice - libcontainer container kubepods-burstable-pod0babb8ec_614b_4d27_a5c4_cb80e7016a50.slice. Jan 27 05:39:53.750611 systemd[1]: Created slice kubepods-besteffort-pod94542ac6_892a_4ec1_b71f_6197c63a4798.slice - libcontainer container kubepods-besteffort-pod94542ac6_892a_4ec1_b71f_6197c63a4798.slice. Jan 27 05:39:53.756327 systemd[1]: Created slice kubepods-besteffort-pod7997e895_ab0d_47da_83eb_264fa47d7c87.slice - libcontainer container kubepods-besteffort-pod7997e895_ab0d_47da_83eb_264fa47d7c87.slice. Jan 27 05:39:53.767056 systemd[1]: Created slice kubepods-besteffort-podcf580b0a_7ab1_4b43_ad9f_7219ad766e09.slice - libcontainer container kubepods-besteffort-podcf580b0a_7ab1_4b43_ad9f_7219ad766e09.slice. Jan 27 05:39:53.775804 systemd[1]: Created slice kubepods-besteffort-pod22258eaf_cd76_4bd2_ad47_8f4a85b664bd.slice - libcontainer container kubepods-besteffort-pod22258eaf_cd76_4bd2_ad47_8f4a85b664bd.slice. Jan 27 05:39:53.782361 systemd[1]: Created slice kubepods-burstable-pode28c560f_1255_4106_9071_95e1b99b107b.slice - libcontainer container kubepods-burstable-pode28c560f_1255_4106_9071_95e1b99b107b.slice. Jan 27 05:39:53.787595 systemd[1]: Created slice kubepods-besteffort-pod2a3676d6_dbc6_4326_8dba_e3375f935a86.slice - libcontainer container kubepods-besteffort-pod2a3676d6_dbc6_4326_8dba_e3375f935a86.slice. 
Jan 27 05:39:53.815682 kubelet[2895]: I0127 05:39:53.815389 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-backend-key-pair\") pod \"whisker-56694d88f5-h25k4\" (UID: \"94542ac6-892a-4ec1-b71f-6197c63a4798\") " pod="calico-system/whisker-56694d88f5-h25k4" Jan 27 05:39:53.815682 kubelet[2895]: I0127 05:39:53.815421 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e28c560f-1255-4106-9071-95e1b99b107b-config-volume\") pod \"coredns-668d6bf9bc-2xtdt\" (UID: \"e28c560f-1255-4106-9071-95e1b99b107b\") " pod="kube-system/coredns-668d6bf9bc-2xtdt" Jan 27 05:39:53.815682 kubelet[2895]: I0127 05:39:53.815441 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptxs\" (UniqueName: \"kubernetes.io/projected/2a3676d6-dbc6-4326-8dba-e3375f935a86-kube-api-access-9ptxs\") pod \"calico-apiserver-6d9df44df9-7r4l6\" (UID: \"2a3676d6-dbc6-4326-8dba-e3375f935a86\") " pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" Jan 27 05:39:53.815682 kubelet[2895]: I0127 05:39:53.815457 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0babb8ec-614b-4d27-a5c4-cb80e7016a50-config-volume\") pod \"coredns-668d6bf9bc-nrrrf\" (UID: \"0babb8ec-614b-4d27-a5c4-cb80e7016a50\") " pod="kube-system/coredns-668d6bf9bc-nrrrf" Jan 27 05:39:53.815682 kubelet[2895]: I0127 05:39:53.815476 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2sx\" (UniqueName: \"kubernetes.io/projected/94542ac6-892a-4ec1-b71f-6197c63a4798-kube-api-access-2t2sx\") pod \"whisker-56694d88f5-h25k4\" (UID: \"94542ac6-892a-4ec1-b71f-6197c63a4798\") " pod="calico-system/whisker-56694d88f5-h25k4" Jan 27 05:39:53.815902 kubelet[2895]: I0127 05:39:53.815512 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2a3676d6-dbc6-4326-8dba-e3375f935a86-calico-apiserver-certs\") pod \"calico-apiserver-6d9df44df9-7r4l6\" (UID: \"2a3676d6-dbc6-4326-8dba-e3375f935a86\") " pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" Jan 27 05:39:53.815902 kubelet[2895]: I0127 05:39:53.815530 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf580b0a-7ab1-4b43-ad9f-7219ad766e09-tigera-ca-bundle\") pod \"calico-kube-controllers-68f86b6c77-cbrw7\" (UID: \"cf580b0a-7ab1-4b43-ad9f-7219ad766e09\") " pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" Jan 27 05:39:53.815902 kubelet[2895]: I0127 05:39:53.815545 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-ca-bundle\") pod \"whisker-56694d88f5-h25k4\" (UID: \"94542ac6-892a-4ec1-b71f-6197c63a4798\") " pod="calico-system/whisker-56694d88f5-h25k4" Jan 27 05:39:53.815902 kubelet[2895]: I0127 05:39:53.815559 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz57s\" (UniqueName: 
\"kubernetes.io/projected/7997e895-ab0d-47da-83eb-264fa47d7c87-kube-api-access-kz57s\") pod \"calico-apiserver-6d9df44df9-9vzkl\" (UID: \"7997e895-ab0d-47da-83eb-264fa47d7c87\") " pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" Jan 27 05:39:53.815902 kubelet[2895]: I0127 05:39:53.815573 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5fn4\" (UniqueName: \"kubernetes.io/projected/e28c560f-1255-4106-9071-95e1b99b107b-kube-api-access-c5fn4\") pod \"coredns-668d6bf9bc-2xtdt\" (UID: \"e28c560f-1255-4106-9071-95e1b99b107b\") " pod="kube-system/coredns-668d6bf9bc-2xtdt" Jan 27 05:39:53.816211 kubelet[2895]: I0127 05:39:53.815589 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22258eaf-cd76-4bd2-ad47-8f4a85b664bd-goldmane-ca-bundle\") pod \"goldmane-666569f655-kbs8f\" (UID: \"22258eaf-cd76-4bd2-ad47-8f4a85b664bd\") " pod="calico-system/goldmane-666569f655-kbs8f" Jan 27 05:39:53.816211 kubelet[2895]: I0127 05:39:53.815606 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcm2g\" (UniqueName: \"kubernetes.io/projected/0babb8ec-614b-4d27-a5c4-cb80e7016a50-kube-api-access-jcm2g\") pod \"coredns-668d6bf9bc-nrrrf\" (UID: \"0babb8ec-614b-4d27-a5c4-cb80e7016a50\") " pod="kube-system/coredns-668d6bf9bc-nrrrf" Jan 27 05:39:53.816211 kubelet[2895]: I0127 05:39:53.815620 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq9j5\" (UniqueName: \"kubernetes.io/projected/cf580b0a-7ab1-4b43-ad9f-7219ad766e09-kube-api-access-jq9j5\") pod \"calico-kube-controllers-68f86b6c77-cbrw7\" (UID: \"cf580b0a-7ab1-4b43-ad9f-7219ad766e09\") " pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" Jan 27 05:39:53.816211 kubelet[2895]: I0127 05:39:53.815634 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7997e895-ab0d-47da-83eb-264fa47d7c87-calico-apiserver-certs\") pod \"calico-apiserver-6d9df44df9-9vzkl\" (UID: \"7997e895-ab0d-47da-83eb-264fa47d7c87\") " pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" Jan 27 05:39:53.816211 kubelet[2895]: I0127 05:39:53.815649 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22258eaf-cd76-4bd2-ad47-8f4a85b664bd-config\") pod \"goldmane-666569f655-kbs8f\" (UID: \"22258eaf-cd76-4bd2-ad47-8f4a85b664bd\") " pod="calico-system/goldmane-666569f655-kbs8f" Jan 27 05:39:53.816319 kubelet[2895]: I0127 05:39:53.815664 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/22258eaf-cd76-4bd2-ad47-8f4a85b664bd-goldmane-key-pair\") pod \"goldmane-666569f655-kbs8f\" (UID: \"22258eaf-cd76-4bd2-ad47-8f4a85b664bd\") " pod="calico-system/goldmane-666569f655-kbs8f" Jan 27 05:39:53.816319 kubelet[2895]: I0127 05:39:53.815677 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2fvx\" (UniqueName: \"kubernetes.io/projected/22258eaf-cd76-4bd2-ad47-8f4a85b664bd-kube-api-access-f2fvx\") pod \"goldmane-666569f655-kbs8f\" (UID: \"22258eaf-cd76-4bd2-ad47-8f4a85b664bd\") " 
pod="calico-system/goldmane-666569f655-kbs8f" Jan 27 05:39:54.385784 containerd[1681]: time="2026-01-27T05:39:54.385604861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2xtdt,Uid:e28c560f-1255-4106-9071-95e1b99b107b,Namespace:kube-system,Attempt:0,}" Jan 27 05:39:54.643212 containerd[1681]: time="2026-01-27T05:39:54.643134287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrrrf,Uid:0babb8ec-614b-4d27-a5c4-cb80e7016a50,Namespace:kube-system,Attempt:0,}" Jan 27 05:39:54.653791 containerd[1681]: time="2026-01-27T05:39:54.653762834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56694d88f5-h25k4,Uid:94542ac6-892a-4ec1-b71f-6197c63a4798,Namespace:calico-system,Attempt:0,}" Jan 27 05:39:54.662240 containerd[1681]: time="2026-01-27T05:39:54.662204931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-9vzkl,Uid:7997e895-ab0d-47da-83eb-264fa47d7c87,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:39:54.670832 containerd[1681]: time="2026-01-27T05:39:54.670774856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68f86b6c77-cbrw7,Uid:cf580b0a-7ab1-4b43-ad9f-7219ad766e09,Namespace:calico-system,Attempt:0,}" Jan 27 05:39:54.679671 containerd[1681]: time="2026-01-27T05:39:54.679604888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kbs8f,Uid:22258eaf-cd76-4bd2-ad47-8f4a85b664bd,Namespace:calico-system,Attempt:0,}" Jan 27 05:39:54.691625 containerd[1681]: time="2026-01-27T05:39:54.691552654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-7r4l6,Uid:2a3676d6-dbc6-4326-8dba-e3375f935a86,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:39:55.290682 containerd[1681]: time="2026-01-27T05:39:55.290572767Z" level=error msg="Failed to destroy network for sandbox \"a20811429ae3c6097fd3d8ec5114848060daa6f38914f89744498946f7480572\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.295900 containerd[1681]: time="2026-01-27T05:39:55.295815720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56694d88f5-h25k4,Uid:94542ac6-892a-4ec1-b71f-6197c63a4798,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20811429ae3c6097fd3d8ec5114848060daa6f38914f89744498946f7480572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.296429 kubelet[2895]: E0127 05:39:55.296296 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20811429ae3c6097fd3d8ec5114848060daa6f38914f89744498946f7480572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.296429 kubelet[2895]: E0127 05:39:55.296370 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20811429ae3c6097fd3d8ec5114848060daa6f38914f89744498946f7480572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56694d88f5-h25k4" Jan 27 05:39:55.296429 kubelet[2895]: E0127 05:39:55.296390 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20811429ae3c6097fd3d8ec5114848060daa6f38914f89744498946f7480572\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56694d88f5-h25k4" Jan 27 05:39:55.297083 kubelet[2895]: E0127 05:39:55.296442 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56694d88f5-h25k4_calico-system(94542ac6-892a-4ec1-b71f-6197c63a4798)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56694d88f5-h25k4_calico-system(94542ac6-892a-4ec1-b71f-6197c63a4798)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a20811429ae3c6097fd3d8ec5114848060daa6f38914f89744498946f7480572\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56694d88f5-h25k4" podUID="94542ac6-892a-4ec1-b71f-6197c63a4798" Jan 27 05:39:55.317428 containerd[1681]: time="2026-01-27T05:39:55.317381162Z" level=error msg="Failed to destroy network for sandbox \"89e8ef1d3320e844cb013ba7c8d9b88377b025a0e69d46710b075fb0fd93599c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.319743 containerd[1681]: time="2026-01-27T05:39:55.319671909Z" level=error msg="Failed to destroy network for sandbox \"1f7e40497a11803deabd27b73b271fc56bed0730e63557fa18326f2f0e0472e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.326504 containerd[1681]: time="2026-01-27T05:39:55.326455986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrrrf,Uid:0babb8ec-614b-4d27-a5c4-cb80e7016a50,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e8ef1d3320e844cb013ba7c8d9b88377b025a0e69d46710b075fb0fd93599c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.327446 kubelet[2895]: E0127 05:39:55.326848 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e8ef1d3320e844cb013ba7c8d9b88377b025a0e69d46710b075fb0fd93599c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.327446 kubelet[2895]: E0127 05:39:55.326899 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e8ef1d3320e844cb013ba7c8d9b88377b025a0e69d46710b075fb0fd93599c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nrrrf" Jan 27 05:39:55.327446 kubelet[2895]: E0127 05:39:55.326918 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e8ef1d3320e844cb013ba7c8d9b88377b025a0e69d46710b075fb0fd93599c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nrrrf" Jan 27 05:39:55.327595 kubelet[2895]: E0127 05:39:55.326956 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nrrrf_kube-system(0babb8ec-614b-4d27-a5c4-cb80e7016a50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nrrrf_kube-system(0babb8ec-614b-4d27-a5c4-cb80e7016a50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89e8ef1d3320e844cb013ba7c8d9b88377b025a0e69d46710b075fb0fd93599c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nrrrf" podUID="0babb8ec-614b-4d27-a5c4-cb80e7016a50" Jan 27 05:39:55.329792 containerd[1681]: time="2026-01-27T05:39:55.329741386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2xtdt,Uid:e28c560f-1255-4106-9071-95e1b99b107b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7e40497a11803deabd27b73b271fc56bed0730e63557fa18326f2f0e0472e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.330356 kubelet[2895]: E0127 05:39:55.329915 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7e40497a11803deabd27b73b271fc56bed0730e63557fa18326f2f0e0472e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.330356 kubelet[2895]: E0127 05:39:55.330054 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7e40497a11803deabd27b73b271fc56bed0730e63557fa18326f2f0e0472e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2xtdt" Jan 27 05:39:55.330356 kubelet[2895]: E0127 05:39:55.330079 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7e40497a11803deabd27b73b271fc56bed0730e63557fa18326f2f0e0472e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2xtdt" Jan 27 05:39:55.330465 kubelet[2895]: E0127 05:39:55.330115 2895 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2xtdt_kube-system(e28c560f-1255-4106-9071-95e1b99b107b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2xtdt_kube-system(e28c560f-1255-4106-9071-95e1b99b107b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f7e40497a11803deabd27b73b271fc56bed0730e63557fa18326f2f0e0472e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2xtdt" podUID="e28c560f-1255-4106-9071-95e1b99b107b" Jan 27 05:39:55.330968 containerd[1681]: time="2026-01-27T05:39:55.330896928Z" level=error msg="Failed to destroy network for sandbox \"05a701193f5d5635ba9c54f3ef9f52d984fd1cc35d196e1749f1b7b2b8835e0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.333390 containerd[1681]: time="2026-01-27T05:39:55.333345846Z" level=error msg="Failed to destroy network for sandbox \"69ca4998edb7831534f06a1c05cafa86bc5af8c2722f4b990f3debd936a2e226\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.334094 containerd[1681]: time="2026-01-27T05:39:55.334073947Z" level=error msg="Failed to destroy network for sandbox \"ebb4cf70866fad5483e4f34186f426e9420e45f33eebe2f4660ae8b3267de681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.336281 containerd[1681]: time="2026-01-27T05:39:55.336221238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kbs8f,Uid:22258eaf-cd76-4bd2-ad47-8f4a85b664bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69ca4998edb7831534f06a1c05cafa86bc5af8c2722f4b990f3debd936a2e226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.336847 kubelet[2895]: E0127 05:39:55.336823 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69ca4998edb7831534f06a1c05cafa86bc5af8c2722f4b990f3debd936a2e226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.337188 kubelet[2895]: E0127 05:39:55.336931 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69ca4998edb7831534f06a1c05cafa86bc5af8c2722f4b990f3debd936a2e226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kbs8f" Jan 27 05:39:55.337188 kubelet[2895]: E0127 05:39:55.336955 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"69ca4998edb7831534f06a1c05cafa86bc5af8c2722f4b990f3debd936a2e226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kbs8f" Jan 27 05:39:55.337188 kubelet[2895]: E0127 05:39:55.336986 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kbs8f_calico-system(22258eaf-cd76-4bd2-ad47-8f4a85b664bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kbs8f_calico-system(22258eaf-cd76-4bd2-ad47-8f4a85b664bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69ca4998edb7831534f06a1c05cafa86bc5af8c2722f4b990f3debd936a2e226\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:39:55.338511 containerd[1681]: time="2026-01-27T05:39:55.338435456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-9vzkl,Uid:7997e895-ab0d-47da-83eb-264fa47d7c87,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a701193f5d5635ba9c54f3ef9f52d984fd1cc35d196e1749f1b7b2b8835e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.338620 kubelet[2895]: E0127 05:39:55.338594 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a701193f5d5635ba9c54f3ef9f52d984fd1cc35d196e1749f1b7b2b8835e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.338661 kubelet[2895]: E0127 05:39:55.338635 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a701193f5d5635ba9c54f3ef9f52d984fd1cc35d196e1749f1b7b2b8835e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" Jan 27 05:39:55.338661 kubelet[2895]: E0127 05:39:55.338652 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a701193f5d5635ba9c54f3ef9f52d984fd1cc35d196e1749f1b7b2b8835e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" Jan 27 05:39:55.338707 kubelet[2895]: E0127 05:39:55.338687 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d9df44df9-9vzkl_calico-apiserver(7997e895-ab0d-47da-83eb-264fa47d7c87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6d9df44df9-9vzkl_calico-apiserver(7997e895-ab0d-47da-83eb-264fa47d7c87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05a701193f5d5635ba9c54f3ef9f52d984fd1cc35d196e1749f1b7b2b8835e0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:39:55.342107 containerd[1681]: time="2026-01-27T05:39:55.341822428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-7r4l6,Uid:2a3676d6-dbc6-4326-8dba-e3375f935a86,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebb4cf70866fad5483e4f34186f426e9420e45f33eebe2f4660ae8b3267de681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.342211 kubelet[2895]: E0127 05:39:55.341964 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebb4cf70866fad5483e4f34186f426e9420e45f33eebe2f4660ae8b3267de681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.342211 kubelet[2895]: E0127 05:39:55.341998 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebb4cf70866fad5483e4f34186f426e9420e45f33eebe2f4660ae8b3267de681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" Jan 27 05:39:55.342211 kubelet[2895]: E0127 05:39:55.342017 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebb4cf70866fad5483e4f34186f426e9420e45f33eebe2f4660ae8b3267de681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" Jan 27 05:39:55.342289 kubelet[2895]: E0127 05:39:55.342068 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d9df44df9-7r4l6_calico-apiserver(2a3676d6-dbc6-4326-8dba-e3375f935a86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d9df44df9-7r4l6_calico-apiserver(2a3676d6-dbc6-4326-8dba-e3375f935a86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebb4cf70866fad5483e4f34186f426e9420e45f33eebe2f4660ae8b3267de681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:39:55.346208 containerd[1681]: time="2026-01-27T05:39:55.346170042Z" level=error msg="Failed to destroy network for sandbox 
\"62dce398066844babce32856b00dc508a3ca4365910cd90594ede84f50ccb2e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.349526 containerd[1681]: time="2026-01-27T05:39:55.349484238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68f86b6c77-cbrw7,Uid:cf580b0a-7ab1-4b43-ad9f-7219ad766e09,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dce398066844babce32856b00dc508a3ca4365910cd90594ede84f50ccb2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.349818 kubelet[2895]: E0127 05:39:55.349794 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dce398066844babce32856b00dc508a3ca4365910cd90594ede84f50ccb2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.349872 kubelet[2895]: E0127 05:39:55.349837 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dce398066844babce32856b00dc508a3ca4365910cd90594ede84f50ccb2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" Jan 27 05:39:55.349872 kubelet[2895]: E0127 05:39:55.349852 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dce398066844babce32856b00dc508a3ca4365910cd90594ede84f50ccb2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" Jan 27 05:39:55.349926 kubelet[2895]: E0127 05:39:55.349883 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68f86b6c77-cbrw7_calico-system(cf580b0a-7ab1-4b43-ad9f-7219ad766e09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68f86b6c77-cbrw7_calico-system(cf580b0a-7ab1-4b43-ad9f-7219ad766e09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62dce398066844babce32856b00dc508a3ca4365910cd90594ede84f50ccb2e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:39:55.510004 systemd[1]: Created slice kubepods-besteffort-pod7ea10135_90f4_4815_b58a_eefd271d18ce.slice - libcontainer container kubepods-besteffort-pod7ea10135_90f4_4815_b58a_eefd271d18ce.slice. 
Jan 27 05:39:55.511748 containerd[1681]: time="2026-01-27T05:39:55.511721115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mf6bj,Uid:7ea10135-90f4-4815-b58a-eefd271d18ce,Namespace:calico-system,Attempt:0,}" Jan 27 05:39:55.552764 containerd[1681]: time="2026-01-27T05:39:55.552610179Z" level=error msg="Failed to destroy network for sandbox \"32bff79d5362f9878b7df771e360cf932df6068cb86a041d54cc14c5eef36096\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.556362 containerd[1681]: time="2026-01-27T05:39:55.556259438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mf6bj,Uid:7ea10135-90f4-4815-b58a-eefd271d18ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32bff79d5362f9878b7df771e360cf932df6068cb86a041d54cc14c5eef36096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.556802 kubelet[2895]: E0127 05:39:55.556705 2895 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32bff79d5362f9878b7df771e360cf932df6068cb86a041d54cc14c5eef36096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:39:55.556802 kubelet[2895]: E0127 05:39:55.556769 2895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32bff79d5362f9878b7df771e360cf932df6068cb86a041d54cc14c5eef36096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mf6bj" Jan 27 05:39:55.556974 kubelet[2895]: E0127 05:39:55.556789 2895 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32bff79d5362f9878b7df771e360cf932df6068cb86a041d54cc14c5eef36096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mf6bj" Jan 27 05:39:55.556974 kubelet[2895]: E0127 05:39:55.556921 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32bff79d5362f9878b7df771e360cf932df6068cb86a041d54cc14c5eef36096\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:39:55.622599 containerd[1681]: time="2026-01-27T05:39:55.622370488Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 27 05:39:55.641434 systemd[1]: run-netns-cni\x2d46e772a4\x2dc1d7\x2d7d55\x2de6cc\x2ddbc6c945d61f.mount: Deactivated successfully. Jan 27 05:39:55.641547 systemd[1]: run-netns-cni\x2da30a01ab\x2d7cea\x2d5677\x2d2ab5\x2d633dfdbf6d92.mount: Deactivated successfully. Jan 27 05:39:55.641621 systemd[1]: run-netns-cni\x2dbf78d24e\x2da361\x2df5de\x2d843a\x2d930424d1096d.mount: Deactivated successfully. Jan 27 05:39:55.641692 systemd[1]: run-netns-cni\x2d50c4e096\x2d32eb\x2d3d68\x2dfd05\x2d34f9f3885e95.mount: Deactivated successfully. Jan 27 05:40:00.780507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905451381.mount: Deactivated successfully. Jan 27 05:40:00.804109 containerd[1681]: time="2026-01-27T05:40:00.804057081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:00.806254 containerd[1681]: time="2026-01-27T05:40:00.806082870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 27 05:40:00.807906 containerd[1681]: time="2026-01-27T05:40:00.807879735Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:00.811691 containerd[1681]: time="2026-01-27T05:40:00.811666766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:00.812212 containerd[1681]: time="2026-01-27T05:40:00.812186885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.189781399s" Jan 27 05:40:00.812295 containerd[1681]: time="2026-01-27T05:40:00.812283675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 27 05:40:00.826546 containerd[1681]: time="2026-01-27T05:40:00.826515530Z" level=info msg="CreateContainer within sandbox \"32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 27 05:40:00.844142 containerd[1681]: time="2026-01-27T05:40:00.843075008Z" level=info msg="Container 447f356a43eec69c8ea8d0517b657b5af49fb809b152c0b0a8e73a1ea9d7ca5c: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:00.852879 containerd[1681]: time="2026-01-27T05:40:00.852843814Z" level=info msg="CreateContainer within sandbox \"32e2b912650095d4ffacc48ab260b94652cb9fb4244c699ca83a790a25364ef9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"447f356a43eec69c8ea8d0517b657b5af49fb809b152c0b0a8e73a1ea9d7ca5c\"" Jan 27 05:40:00.853615 containerd[1681]: time="2026-01-27T05:40:00.853586706Z" level=info msg="StartContainer for \"447f356a43eec69c8ea8d0517b657b5af49fb809b152c0b0a8e73a1ea9d7ca5c\"" Jan 27 05:40:00.854926 containerd[1681]: time="2026-01-27T05:40:00.854902574Z" level=info msg="connecting to shim 447f356a43eec69c8ea8d0517b657b5af49fb809b152c0b0a8e73a1ea9d7ca5c" 
address="unix:///run/containerd/s/da21c4e50fd1d33ed3da8c70dee88938029449637702cdf66ecf578249be2273" protocol=ttrpc version=3 Jan 27 05:40:00.909202 systemd[1]: Started cri-containerd-447f356a43eec69c8ea8d0517b657b5af49fb809b152c0b0a8e73a1ea9d7ca5c.scope - libcontainer container 447f356a43eec69c8ea8d0517b657b5af49fb809b152c0b0a8e73a1ea9d7ca5c. Jan 27 05:40:00.969000 audit: BPF prog-id=175 op=LOAD Jan 27 05:40:00.971464 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 27 05:40:00.971537 kernel: audit: type=1334 audit(1769492400.969:577): prog-id=175 op=LOAD Jan 27 05:40:00.969000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:00.979248 kernel: audit: type=1300 audit(1769492400.969:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.979307 kernel: audit: type=1327 audit(1769492400.969:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:00.969000 audit: BPF prog-id=176 op=LOAD Jan 27 05:40:00.982071 kernel: audit: type=1334 audit(1769492400.969:578): prog-id=176 op=LOAD Jan 27 05:40:00.969000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.984410 kernel: audit: type=1300 audit(1769492400.969:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:00.970000 audit: BPF prog-id=176 op=UNLOAD Jan 27 05:40:00.993291 kernel: audit: type=1327 audit(1769492400.969:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:00.993335 kernel: audit: type=1334 audit(1769492400.970:579): prog-id=176 op=UNLOAD Jan 27 05:40:00.970000 
audit[3885]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:01.000328 kernel: audit: type=1300 audit(1769492400.970:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:01.000379 kernel: audit: type=1327 audit(1769492400.970:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:00.970000 audit: BPF prog-id=175 op=UNLOAD Jan 27 05:40:01.003130 kernel: audit: type=1334 audit(1769492400.970:580): prog-id=175 op=UNLOAD Jan 27 05:40:00.970000 audit[3885]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:00.970000 audit: BPF prog-id=177 op=LOAD Jan 27 05:40:00.970000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3394 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376633353661343365656336396338656138643035313762363537 Jan 27 05:40:01.014211 containerd[1681]: time="2026-01-27T05:40:01.014172406Z" level=info msg="StartContainer for \"447f356a43eec69c8ea8d0517b657b5af49fb809b152c0b0a8e73a1ea9d7ca5c\" returns successfully" Jan 27 05:40:01.101662 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 27 05:40:01.101778 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
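The StartContainer at 05:40:01.014 above is the turning point: calico-node is finally running, and the WireGuard module load right after it is the kernel side of Calico's optional node-to-node encryption. Part of what the node agent does on startup is publish its node name into the host-mounted /var/lib/calico directory, the very file the earlier CNI calls were stat-ing. A rough sketch of that step, assuming the usual NODENAME-or-hostname convention from calico-node manifests (illustrative only; the real agent also brings up Felix, BGP and status reporting):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // writeNodename is a simplified stand-in for the startup step that unblocks
    // the CNI plugin: drop this node's name where the plugin expects to find it.
    func writeNodename(dir, nodeName string) error {
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "nodename"), []byte(nodeName), 0o644)
    }

    func main() {
        node := os.Getenv("NODENAME")
        if node == "" {
            node, _ = os.Hostname() // fall back to the host name, as typical manifests do
        }
        if err := writeNodename("/var/lib/calico", node); err != nil {
            fmt.Fprintln(os.Stderr, "failed to publish nodename:", err)
            os.Exit(1)
        }
        fmt.Println("published nodename for", node)
    }

Once that file exists, retried sandbox setups can get past the stat and on to IPAM, which is what the whisker pod below demonstrates.
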
Jan 27 05:40:01.368741 kubelet[2895]: I0127 05:40:01.368683 2895 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-backend-key-pair\") pod \"94542ac6-892a-4ec1-b71f-6197c63a4798\" (UID: \"94542ac6-892a-4ec1-b71f-6197c63a4798\") " Jan 27 05:40:01.369839 kubelet[2895]: I0127 05:40:01.369436 2895 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2sx\" (UniqueName: \"kubernetes.io/projected/94542ac6-892a-4ec1-b71f-6197c63a4798-kube-api-access-2t2sx\") pod \"94542ac6-892a-4ec1-b71f-6197c63a4798\" (UID: \"94542ac6-892a-4ec1-b71f-6197c63a4798\") " Jan 27 05:40:01.369839 kubelet[2895]: I0127 05:40:01.369477 2895 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-ca-bundle\") pod \"94542ac6-892a-4ec1-b71f-6197c63a4798\" (UID: \"94542ac6-892a-4ec1-b71f-6197c63a4798\") " Jan 27 05:40:01.373137 kubelet[2895]: I0127 05:40:01.373085 2895 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "94542ac6-892a-4ec1-b71f-6197c63a4798" (UID: "94542ac6-892a-4ec1-b71f-6197c63a4798"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 27 05:40:01.376122 kubelet[2895]: I0127 05:40:01.376097 2895 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "94542ac6-892a-4ec1-b71f-6197c63a4798" (UID: "94542ac6-892a-4ec1-b71f-6197c63a4798"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 27 05:40:01.376539 kubelet[2895]: I0127 05:40:01.376523 2895 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94542ac6-892a-4ec1-b71f-6197c63a4798-kube-api-access-2t2sx" (OuterVolumeSpecName: "kube-api-access-2t2sx") pod "94542ac6-892a-4ec1-b71f-6197c63a4798" (UID: "94542ac6-892a-4ec1-b71f-6197c63a4798"). InnerVolumeSpecName "kube-api-access-2t2sx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 27 05:40:01.470412 kubelet[2895]: I0127 05:40:01.470377 2895 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-ca-bundle\") on node \"ci-4592-0-0-n-eb4c5d05b1\" DevicePath \"\"" Jan 27 05:40:01.470412 kubelet[2895]: I0127 05:40:01.470409 2895 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94542ac6-892a-4ec1-b71f-6197c63a4798-whisker-backend-key-pair\") on node \"ci-4592-0-0-n-eb4c5d05b1\" DevicePath \"\"" Jan 27 05:40:01.470412 kubelet[2895]: I0127 05:40:01.470420 2895 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2t2sx\" (UniqueName: \"kubernetes.io/projected/94542ac6-892a-4ec1-b71f-6197c63a4798-kube-api-access-2t2sx\") on node \"ci-4592-0-0-n-eb4c5d05b1\" DevicePath \"\"" Jan 27 05:40:01.512701 systemd[1]: Removed slice kubepods-besteffort-pod94542ac6_892a_4ec1_b71f_6197c63a4798.slice - libcontainer container kubepods-besteffort-pod94542ac6_892a_4ec1_b71f_6197c63a4798.slice. Jan 27 05:40:01.662911 kubelet[2895]: I0127 05:40:01.662748 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f7dmm" podStartSLOduration=1.707299405 podStartE2EDuration="18.662731963s" podCreationTimestamp="2026-01-27 05:39:43 +0000 UTC" firstStartedPulling="2026-01-27 05:39:43.857500595 +0000 UTC m=+18.451999127" lastFinishedPulling="2026-01-27 05:40:00.812933154 +0000 UTC m=+35.407431685" observedRunningTime="2026-01-27 05:40:01.661668206 +0000 UTC m=+36.256166763" watchObservedRunningTime="2026-01-27 05:40:01.662731963 +0000 UTC m=+36.257230518" Jan 27 05:40:01.723904 systemd[1]: Created slice kubepods-besteffort-pod030cf2c9_9900_4225_8b2a_d77c13f08480.slice - libcontainer container kubepods-besteffort-pod030cf2c9_9900_4225_8b2a_d77c13f08480.slice. Jan 27 05:40:01.772818 kubelet[2895]: I0127 05:40:01.772779 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/030cf2c9-9900-4225-8b2a-d77c13f08480-whisker-ca-bundle\") pod \"whisker-55bd985ccf-jxmbn\" (UID: \"030cf2c9-9900-4225-8b2a-d77c13f08480\") " pod="calico-system/whisker-55bd985ccf-jxmbn" Jan 27 05:40:01.772818 kubelet[2895]: I0127 05:40:01.772828 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdngs\" (UniqueName: \"kubernetes.io/projected/030cf2c9-9900-4225-8b2a-d77c13f08480-kube-api-access-mdngs\") pod \"whisker-55bd985ccf-jxmbn\" (UID: \"030cf2c9-9900-4225-8b2a-d77c13f08480\") " pod="calico-system/whisker-55bd985ccf-jxmbn" Jan 27 05:40:01.772955 kubelet[2895]: I0127 05:40:01.772865 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/030cf2c9-9900-4225-8b2a-d77c13f08480-whisker-backend-key-pair\") pod \"whisker-55bd985ccf-jxmbn\" (UID: \"030cf2c9-9900-4225-8b2a-d77c13f08480\") " pod="calico-system/whisker-55bd985ccf-jxmbn" Jan 27 05:40:01.780166 systemd[1]: var-lib-kubelet-pods-94542ac6\x2d892a\x2d4ec1\x2db71f\x2d6197c63a4798-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 27 05:40:01.780263 systemd[1]: var-lib-kubelet-pods-94542ac6\x2d892a\x2d4ec1\x2db71f\x2d6197c63a4798-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2t2sx.mount: Deactivated successfully. Jan 27 05:40:02.028706 containerd[1681]: time="2026-01-27T05:40:02.028534908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bd985ccf-jxmbn,Uid:030cf2c9-9900-4225-8b2a-d77c13f08480,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:02.238695 systemd-networkd[1489]: cali4fac0f532e0: Link UP Jan 27 05:40:02.239234 systemd-networkd[1489]: cali4fac0f532e0: Gained carrier Jan 27 05:40:02.255254 containerd[1681]: 2026-01-27 05:40:02.057 [INFO][3954] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 05:40:02.255254 containerd[1681]: 2026-01-27 05:40:02.160 [INFO][3954] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0 whisker-55bd985ccf- calico-system 030cf2c9-9900-4225-8b2a-d77c13f08480 862 0 2026-01-27 05:40:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55bd985ccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 whisker-55bd985ccf-jxmbn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4fac0f532e0 [] [] }} ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-" Jan 27 05:40:02.255254 containerd[1681]: 2026-01-27 05:40:02.161 [INFO][3954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" Jan 27 05:40:02.255254 containerd[1681]: 2026-01-27 05:40:02.188 [INFO][3966] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" HandleID="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.189 [INFO][3966] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" HandleID="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"whisker-55bd985ccf-jxmbn", "timestamp":"2026-01-27 05:40:02.188994146 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.189 [INFO][3966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.189 [INFO][3966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.189 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.196 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.200 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.204 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.205 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255455 containerd[1681]: 2026-01-27 05:40:02.207 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255646 containerd[1681]: 2026-01-27 05:40:02.207 [INFO][3966] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255646 containerd[1681]: 2026-01-27 05:40:02.208 [INFO][3966] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a Jan 27 05:40:02.255646 containerd[1681]: 2026-01-27 05:40:02.212 [INFO][3966] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255646 containerd[1681]: 2026-01-27 05:40:02.219 [INFO][3966] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.65/26] block=192.168.51.64/26 handle="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255646 containerd[1681]: 2026-01-27 05:40:02.219 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.65/26] handle="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:02.255646 containerd[1681]: 2026-01-27 05:40:02.219 [INFO][3966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
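The IPAM walkthrough above is Calico's block-affinity allocator on its happy path: the node already holds an affinity for 192.168.51.64/26, so under the host-wide lock the plugin loads that block, hands out the lowest free address, records a handle named after the sandbox, and writes the block back before releasing the lock. A toy version of just the assign-from-block step, with the datastore, handle bookkeeping and claim-a-new-block fallback left out (hypothetical code, not the real allocator):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models one affine /26: its CIDR plus a map of already-assigned
    // addresses, loosely following the "load block, assign from block, write
    // block" steps in the IPAM entries above.
    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]string // address -> handle that claimed it
    }

    // assign hands out the lowest free address after the block's network
    // address and records which handle claimed it.
    func (b *block) assign(handle string) (netip.Addr, bool) {
        for addr := b.cidr.Addr().Next(); b.cidr.Contains(addr); addr = addr.Next() {
            if _, taken := b.used[addr]; !taken {
                b.used[addr] = handle
                return addr, true
            }
        }
        return netip.Addr{}, false // block exhausted; the real allocator would claim another block
    }

    func main() {
        b := &block{
            cidr: netip.MustParsePrefix("192.168.51.64/26"),
            used: map[netip.Addr]string{},
        }
        ip, ok := b.assign("k8s-pod-network.49ccd0e7...") // handle shortened here; the full ID is in the log
        fmt.Println(ip, ok)                               // 192.168.51.65 true
    }

An empty block therefore yields 192.168.51.65 first, exactly the address the plugin claims for whisker-55bd985ccf-jxmbn and then bakes into the WorkloadEndpoint below.
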
Jan 27 05:40:02.255646 containerd[1681]: 2026-01-27 05:40:02.219 [INFO][3966] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.65/26] IPv6=[] ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" HandleID="k8s-pod-network.49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" Jan 27 05:40:02.255777 containerd[1681]: 2026-01-27 05:40:02.222 [INFO][3954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0", GenerateName:"whisker-55bd985ccf-", Namespace:"calico-system", SelfLink:"", UID:"030cf2c9-9900-4225-8b2a-d77c13f08480", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bd985ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"whisker-55bd985ccf-jxmbn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4fac0f532e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:02.255777 containerd[1681]: 2026-01-27 05:40:02.222 [INFO][3954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.65/32] ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" Jan 27 05:40:02.255848 containerd[1681]: 2026-01-27 05:40:02.222 [INFO][3954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fac0f532e0 ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" Jan 27 05:40:02.255848 containerd[1681]: 2026-01-27 05:40:02.240 [INFO][3954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" Jan 27 05:40:02.255891 containerd[1681]: 2026-01-27 05:40:02.240 [INFO][3954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" 
Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0", GenerateName:"whisker-55bd985ccf-", Namespace:"calico-system", SelfLink:"", UID:"030cf2c9-9900-4225-8b2a-d77c13f08480", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bd985ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a", Pod:"whisker-55bd985ccf-jxmbn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4fac0f532e0", MAC:"ba:56:e9:7f:ee:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:02.255939 containerd[1681]: 2026-01-27 05:40:02.253 [INFO][3954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" Namespace="calico-system" Pod="whisker-55bd985ccf-jxmbn" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-whisker--55bd985ccf--jxmbn-eth0" Jan 27 05:40:02.309536 containerd[1681]: time="2026-01-27T05:40:02.309440121Z" level=info msg="connecting to shim 49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a" address="unix:///run/containerd/s/4296bd1b9ed50a9ed78735c74627469cf947336c402a134ecc7beece889da7a0" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:02.333204 systemd[1]: Started cri-containerd-49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a.scope - libcontainer container 49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a. 
Jan 27 05:40:02.342000 audit: BPF prog-id=178 op=LOAD Jan 27 05:40:02.343000 audit: BPF prog-id=179 op=LOAD Jan 27 05:40:02.343000 audit[3998]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:02.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439636364306537333066333837643961623462633634363935343938 Jan 27 05:40:02.343000 audit: BPF prog-id=179 op=UNLOAD Jan 27 05:40:02.343000 audit[3998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:02.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439636364306537333066333837643961623462633634363935343938 Jan 27 05:40:02.343000 audit: BPF prog-id=180 op=LOAD Jan 27 05:40:02.343000 audit[3998]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:02.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439636364306537333066333837643961623462633634363935343938 Jan 27 05:40:02.343000 audit: BPF prog-id=181 op=LOAD Jan 27 05:40:02.343000 audit[3998]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:02.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439636364306537333066333837643961623462633634363935343938 Jan 27 05:40:02.343000 audit: BPF prog-id=181 op=UNLOAD Jan 27 05:40:02.343000 audit[3998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:02.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439636364306537333066333837643961623462633634363935343938 Jan 27 05:40:02.343000 audit: BPF prog-id=180 op=UNLOAD Jan 27 05:40:02.343000 audit[3998]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:02.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439636364306537333066333837643961623462633634363935343938 Jan 27 05:40:02.343000 audit: BPF prog-id=182 op=LOAD Jan 27 05:40:02.343000 audit[3998]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:02.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439636364306537333066333837643961623462633634363935343938 Jan 27 05:40:02.378750 containerd[1681]: time="2026-01-27T05:40:02.378718688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bd985ccf-jxmbn,Uid:030cf2c9-9900-4225-8b2a-d77c13f08480,Namespace:calico-system,Attempt:0,} returns sandbox id \"49ccd0e730f387d9ab4bc64695498e875dd68eefa7cc55fc6772999e86ceed7a\"" Jan 27 05:40:02.380153 containerd[1681]: time="2026-01-27T05:40:02.380133842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:40:02.709092 containerd[1681]: time="2026-01-27T05:40:02.708798306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:02.710594 containerd[1681]: time="2026-01-27T05:40:02.710548019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:40:02.710752 containerd[1681]: time="2026-01-27T05:40:02.710643573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:02.710876 kubelet[2895]: E0127 05:40:02.710837 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:40:02.711337 kubelet[2895]: E0127 05:40:02.711199 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:40:02.717159 kubelet[2895]: E0127 05:40:02.717095 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57b1faf7f7fc40c5b00dfb0c507a2180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:02.719922 containerd[1681]: time="2026-01-27T05:40:02.719686607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:40:03.073661 containerd[1681]: time="2026-01-27T05:40:03.073605130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:03.074996 containerd[1681]: time="2026-01-27T05:40:03.074968568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:40:03.075079 containerd[1681]: time="2026-01-27T05:40:03.075062503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:03.075257 kubelet[2895]: E0127 05:40:03.075226 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:40:03.075299 kubelet[2895]: E0127 05:40:03.075266 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:40:03.076193 kubelet[2895]: E0127 05:40:03.076122 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:03.077435 kubelet[2895]: E0127 05:40:03.077381 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:40:03.411337 systemd-networkd[1489]: cali4fac0f532e0: Gained IPv6LL Jan 27 05:40:03.467478 kubelet[2895]: I0127 05:40:03.467166 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 05:40:03.506623 kubelet[2895]: I0127 05:40:03.506544 2895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94542ac6-892a-4ec1-b71f-6197c63a4798" path="/var/lib/kubelet/pods/94542ac6-892a-4ec1-b71f-6197c63a4798/volumes" Jan 27 05:40:03.514000 audit[4120]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 
05:40:03.514000 audit[4120]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcbb46a040 a2=0 a3=7ffcbb46a02c items=0 ppid=2997 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:03.518000 audit[4120]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:03.518000 audit[4120]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcbb46a040 a2=0 a3=7ffcbb46a02c items=0 ppid=2997 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:03.650330 kubelet[2895]: E0127 05:40:03.650287 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:40:03.682000 audit[4135]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:03.682000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcedc83850 a2=0 a3=7ffcedc8383c items=0 ppid=2997 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:03.685000 audit[4135]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:03.685000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcedc83850 a2=0 a3=0 items=0 ppid=2997 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:03.749000 audit: BPF prog-id=183 
op=LOAD Jan 27 05:40:03.749000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd1255e60 a2=98 a3=1fffffffffffffff items=0 ppid=4121 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:40:03.749000 audit: BPF prog-id=183 op=UNLOAD Jan 27 05:40:03.749000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd1255e30 a3=0 items=0 ppid=4121 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:40:03.749000 audit: BPF prog-id=184 op=LOAD Jan 27 05:40:03.749000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd1255d40 a2=94 a3=3 items=0 ppid=4121 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:40:03.749000 audit: BPF prog-id=184 op=UNLOAD Jan 27 05:40:03.749000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd1255d40 a2=94 a3=3 items=0 ppid=4121 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:40:03.749000 audit: BPF prog-id=185 op=LOAD Jan 27 05:40:03.749000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd1255d80 a2=94 a3=7ffcd1255f60 items=0 ppid=4121 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:40:03.749000 audit: BPF prog-id=185 op=UNLOAD Jan 27 05:40:03.749000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=3 a1=7ffcd1255d80 a2=94 a3=7ffcd1255f60 items=0 ppid=4121 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:40:03.751000 audit: BPF prog-id=186 op=LOAD Jan 27 05:40:03.751000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd455f3eb0 a2=98 a3=3 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.751000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.753000 audit: BPF prog-id=186 op=UNLOAD Jan 27 05:40:03.753000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd455f3e80 a3=0 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.753000 audit: BPF prog-id=187 op=LOAD Jan 27 05:40:03.753000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd455f3ca0 a2=94 a3=54428f items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.753000 audit: BPF prog-id=187 op=UNLOAD Jan 27 05:40:03.753000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd455f3ca0 a2=94 a3=54428f items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.753000 audit: BPF prog-id=188 op=LOAD Jan 27 05:40:03.753000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd455f3cd0 a2=94 a3=2 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.753000 audit: BPF prog-id=188 op=UNLOAD Jan 27 05:40:03.753000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd455f3cd0 a2=0 a3=2 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.957000 audit: BPF prog-id=189 op=LOAD Jan 27 05:40:03.957000 audit[4155]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd455f3b90 a2=94 a3=1 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.957000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.957000 audit: BPF prog-id=189 op=UNLOAD Jan 27 05:40:03.957000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd455f3b90 a2=94 a3=1 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.957000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.968000 audit: BPF prog-id=190 op=LOAD Jan 27 05:40:03.968000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd455f3b80 a2=94 a3=4 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.968000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.968000 audit: BPF prog-id=190 op=UNLOAD Jan 27 05:40:03.968000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd455f3b80 a2=0 a3=4 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.968000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.968000 audit: BPF prog-id=191 op=LOAD Jan 27 05:40:03.968000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd455f39e0 a2=94 a3=5 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.968000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.968000 audit: BPF prog-id=191 op=UNLOAD Jan 27 05:40:03.968000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd455f39e0 a2=0 a3=5 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.968000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.968000 audit: BPF prog-id=192 op=LOAD Jan 27 05:40:03.968000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd455f3c00 a2=94 a3=6 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.968000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.968000 audit: BPF prog-id=192 op=UNLOAD Jan 27 05:40:03.968000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd455f3c00 a2=0 a3=6 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.968000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.968000 audit: BPF prog-id=193 op=LOAD Jan 27 05:40:03.968000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd455f33b0 a2=94 a3=88 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.968000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.969000 audit: BPF prog-id=194 op=LOAD Jan 27 05:40:03.969000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd455f3230 a2=94 a3=2 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.969000 audit: BPF prog-id=194 op=UNLOAD Jan 27 05:40:03.969000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd455f3260 a2=0 a3=7ffd455f3360 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.969000 audit: BPF prog-id=193 op=UNLOAD Jan 27 05:40:03.969000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=bf2cd10 a2=0 a3=f2120bf995bd95c6 items=0 ppid=4121 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.969000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:40:03.978000 audit: BPF prog-id=195 op=LOAD Jan 27 05:40:03.978000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6c3a0b50 a2=98 a3=1999999999999999 items=0 ppid=4121 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:40:03.978000 audit: BPF prog-id=195 op=UNLOAD Jan 27 05:40:03.978000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc6c3a0b20 a3=0 items=0 ppid=4121 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 
05:40:03.978000 audit: BPF prog-id=196 op=LOAD Jan 27 05:40:03.978000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6c3a0a30 a2=94 a3=ffff items=0 ppid=4121 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:40:03.978000 audit: BPF prog-id=196 op=UNLOAD Jan 27 05:40:03.978000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6c3a0a30 a2=94 a3=ffff items=0 ppid=4121 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:40:03.978000 audit: BPF prog-id=197 op=LOAD Jan 27 05:40:03.978000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6c3a0a70 a2=94 a3=7ffc6c3a0c50 items=0 ppid=4121 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:40:03.978000 audit: BPF prog-id=197 op=UNLOAD Jan 27 05:40:03.978000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6c3a0a70 a2=94 a3=7ffc6c3a0c50 items=0 ppid=4121 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:03.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:40:04.048627 systemd-networkd[1489]: vxlan.calico: Link UP Jan 27 05:40:04.048635 systemd-networkd[1489]: vxlan.calico: Gained carrier Jan 27 05:40:04.067000 audit: BPF prog-id=198 op=LOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc7c58d300 a2=98 a3=0 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=198 op=UNLOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc7c58d2d0 a3=0 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=199 op=LOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc7c58d110 a2=94 a3=54428f items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=199 op=UNLOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc7c58d110 a2=94 a3=54428f items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=200 op=LOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc7c58d140 a2=94 a3=2 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=200 op=UNLOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc7c58d140 a2=0 a3=2 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=201 op=LOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=6 a0=5 a1=7ffc7c58cef0 a2=94 a3=4 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=201 op=UNLOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc7c58cef0 a2=94 a3=4 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.067000 audit: BPF prog-id=202 op=LOAD Jan 27 05:40:04.067000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc7c58cff0 a2=94 a3=7ffc7c58d170 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.067000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.068000 audit: BPF prog-id=202 op=UNLOAD Jan 27 05:40:04.068000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc7c58cff0 a2=0 a3=7ffc7c58d170 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.068000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.072000 audit: BPF prog-id=203 op=LOAD Jan 27 05:40:04.072000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc7c58c720 a2=94 a3=2 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.072000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.072000 audit: BPF prog-id=203 op=UNLOAD Jan 27 05:40:04.072000 audit[4203]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc7c58c720 a2=0 a3=2 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.072000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.072000 audit: BPF prog-id=204 op=LOAD Jan 27 05:40:04.072000 audit[4203]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc7c58c820 a2=94 a3=30 items=0 ppid=4121 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.072000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:40:04.081000 audit: BPF prog-id=205 op=LOAD Jan 27 05:40:04.081000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5e1764c0 a2=98 a3=0 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.081000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.082000 audit: BPF prog-id=205 op=UNLOAD Jan 27 05:40:04.082000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd5e176490 a3=0 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.082000 audit: BPF prog-id=206 op=LOAD Jan 27 05:40:04.082000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd5e1762b0 a2=94 a3=54428f items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.082000 audit: BPF prog-id=206 op=UNLOAD Jan 27 05:40:04.082000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd5e1762b0 a2=94 a3=54428f items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.082000 audit: BPF prog-id=207 op=LOAD Jan 27 05:40:04.082000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd5e1762e0 a2=94 a3=2 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.082000 audit: BPF prog-id=207 op=UNLOAD Jan 27 05:40:04.082000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd5e1762e0 a2=0 a3=2 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.244000 audit: BPF prog-id=208 op=LOAD Jan 27 05:40:04.244000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd5e1761a0 a2=94 a3=1 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.244000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.245000 audit: BPF prog-id=208 op=UNLOAD Jan 27 05:40:04.245000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd5e1761a0 a2=94 a3=1 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.245000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.255000 audit: BPF prog-id=209 op=LOAD Jan 27 05:40:04.255000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd5e176190 a2=94 a3=4 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.255000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.255000 audit: BPF prog-id=209 op=UNLOAD Jan 27 05:40:04.255000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd5e176190 a2=0 a3=4 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.255000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.255000 audit: BPF prog-id=210 op=LOAD Jan 27 05:40:04.255000 audit[4209]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd5e175ff0 a2=94 a3=5 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.255000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.256000 audit: BPF prog-id=210 op=UNLOAD Jan 27 05:40:04.256000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd5e175ff0 a2=0 a3=5 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.256000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.256000 audit: BPF prog-id=211 op=LOAD Jan 27 05:40:04.256000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd5e176210 a2=94 a3=6 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.256000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.256000 audit: BPF prog-id=211 op=UNLOAD Jan 27 05:40:04.256000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd5e176210 a2=0 a3=6 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.256000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.256000 audit: BPF prog-id=212 op=LOAD Jan 27 05:40:04.256000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd5e1759c0 a2=94 a3=88 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.256000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.256000 audit: BPF prog-id=213 op=LOAD Jan 27 05:40:04.256000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd5e175840 a2=94 a3=2 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.256000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.256000 audit: BPF prog-id=213 op=UNLOAD Jan 27 05:40:04.256000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd5e175870 a2=0 a3=7ffd5e175970 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.256000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.257000 audit: BPF prog-id=212 op=UNLOAD Jan 27 05:40:04.257000 audit[4209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2023fd10 a2=0 a3=23432b3ea0e549b8 items=0 ppid=4121 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.257000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:40:04.262000 audit: BPF prog-id=204 op=UNLOAD Jan 27 05:40:04.262000 audit[4121]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0009523c0 a2=0 a3=0 items=0 ppid=4027 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.262000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 27 05:40:04.311000 audit[4231]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:04.311000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffda25d0590 a2=0 a3=7ffda25d057c items=0 ppid=4121 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.311000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:04.314000 audit[4234]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:04.314000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcf83a94e0 a2=0 a3=7ffcf83a94cc items=0 ppid=4121 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.314000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:04.319000 audit[4230]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4230 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Jan 27 05:40:04.319000 audit[4230]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe2bd4f860 a2=0 a3=7ffe2bd4f84c items=0 ppid=4121 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.319000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:04.324000 audit[4232]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:04.324000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe521d9000 a2=0 a3=7ffe521d8fec items=0 ppid=4121 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:04.324000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:04.852971 kubelet[2895]: I0127 05:40:04.852711 2895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 05:40:05.202282 systemd-networkd[1489]: vxlan.calico: Gained IPv6LL Jan 27 05:40:05.506344 containerd[1681]: time="2026-01-27T05:40:05.506293626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2xtdt,Uid:e28c560f-1255-4106-9071-95e1b99b107b,Namespace:kube-system,Attempt:0,}" Jan 27 05:40:05.610931 systemd-networkd[1489]: calif8ac81bd144: Link UP Jan 27 05:40:05.611613 systemd-networkd[1489]: calif8ac81bd144: Gained carrier Jan 27 05:40:05.625942 containerd[1681]: 2026-01-27 05:40:05.547 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0 coredns-668d6bf9bc- kube-system e28c560f-1255-4106-9071-95e1b99b107b 798 0 2026-01-27 05:39:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 coredns-668d6bf9bc-2xtdt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif8ac81bd144 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-" Jan 27 05:40:05.625942 containerd[1681]: 2026-01-27 05:40:05.547 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" Jan 27 05:40:05.625942 containerd[1681]: 2026-01-27 05:40:05.572 [INFO][4312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" HandleID="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" 
Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.572 [INFO][4312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" HandleID="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"coredns-668d6bf9bc-2xtdt", "timestamp":"2026-01-27 05:40:05.572015859 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.572 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.572 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.572 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.582 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.586 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.590 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.591 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626180 containerd[1681]: 2026-01-27 05:40:05.593 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626668 containerd[1681]: 2026-01-27 05:40:05.593 [INFO][4312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626668 containerd[1681]: 2026-01-27 05:40:05.594 [INFO][4312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576 Jan 27 05:40:05.626668 containerd[1681]: 2026-01-27 05:40:05.598 [INFO][4312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626668 containerd[1681]: 2026-01-27 05:40:05.606 [INFO][4312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.66/26] block=192.168.51.64/26 handle="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626668 containerd[1681]: 2026-01-27 05:40:05.606 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.66/26] 
handle="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:05.626668 containerd[1681]: 2026-01-27 05:40:05.606 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:40:05.626668 containerd[1681]: 2026-01-27 05:40:05.606 [INFO][4312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.66/26] IPv6=[] ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" HandleID="k8s-pod-network.4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" Jan 27 05:40:05.626848 containerd[1681]: 2026-01-27 05:40:05.608 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e28c560f-1255-4106-9071-95e1b99b107b", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"coredns-668d6bf9bc-2xtdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8ac81bd144", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:05.626848 containerd[1681]: 2026-01-27 05:40:05.608 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.66/32] ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" Jan 27 05:40:05.626848 containerd[1681]: 2026-01-27 05:40:05.608 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8ac81bd144 ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" 
Jan 27 05:40:05.626848 containerd[1681]: 2026-01-27 05:40:05.611 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" Jan 27 05:40:05.626848 containerd[1681]: 2026-01-27 05:40:05.611 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e28c560f-1255-4106-9071-95e1b99b107b", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576", Pod:"coredns-668d6bf9bc-2xtdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8ac81bd144", MAC:"ce:6c:96:e5:ad:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:05.626848 containerd[1681]: 2026-01-27 05:40:05.622 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" Namespace="kube-system" Pod="coredns-668d6bf9bc-2xtdt" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--2xtdt-eth0" Jan 27 05:40:05.639000 audit[4326]: NETFILTER_CFG table=filter:125 family=2 entries=42 op=nft_register_chain pid=4326 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:05.639000 audit[4326]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffcd47f4590 a2=0 a3=7ffcd47f457c items=0 ppid=4121 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.639000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:05.658508 containerd[1681]: time="2026-01-27T05:40:05.658423176Z" level=info msg="connecting to shim 4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576" address="unix:///run/containerd/s/72b06e4067ba9276d56a80529aa88262fe1686cd543dbf5bcbf3a9fb35ddf190" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:05.697307 systemd[1]: Started cri-containerd-4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576.scope - libcontainer container 4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576. Jan 27 05:40:05.708000 audit: BPF prog-id=214 op=LOAD Jan 27 05:40:05.708000 audit: BPF prog-id=215 op=LOAD Jan 27 05:40:05.708000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333639633038316330393032633433363430636466303234303530 Jan 27 05:40:05.708000 audit: BPF prog-id=215 op=UNLOAD Jan 27 05:40:05.708000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333639633038316330393032633433363430636466303234303530 Jan 27 05:40:05.709000 audit: BPF prog-id=216 op=LOAD Jan 27 05:40:05.709000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333639633038316330393032633433363430636466303234303530 Jan 27 05:40:05.709000 audit: BPF prog-id=217 op=LOAD Jan 27 05:40:05.709000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333639633038316330393032633433363430636466303234303530 Jan 27 05:40:05.709000 audit: BPF prog-id=217 op=UNLOAD Jan 27 
05:40:05.709000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333639633038316330393032633433363430636466303234303530 Jan 27 05:40:05.710000 audit: BPF prog-id=216 op=UNLOAD Jan 27 05:40:05.710000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333639633038316330393032633433363430636466303234303530 Jan 27 05:40:05.710000 audit: BPF prog-id=218 op=LOAD Jan 27 05:40:05.710000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4335 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333639633038316330393032633433363430636466303234303530 Jan 27 05:40:05.747789 containerd[1681]: time="2026-01-27T05:40:05.747738023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2xtdt,Uid:e28c560f-1255-4106-9071-95e1b99b107b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576\"" Jan 27 05:40:05.750758 containerd[1681]: time="2026-01-27T05:40:05.750674972Z" level=info msg="CreateContainer within sandbox \"4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 05:40:05.768187 containerd[1681]: time="2026-01-27T05:40:05.767595083Z" level=info msg="Container c52759cfa003b7473686051457fe1c4a1d1f19f8f583c4e38a36818d18558559: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:05.778544 containerd[1681]: time="2026-01-27T05:40:05.778498154Z" level=info msg="CreateContainer within sandbox \"4b369c081c0902c43640cdf024050a42bed3c020f8202a88491cdd3efd4d6576\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c52759cfa003b7473686051457fe1c4a1d1f19f8f583c4e38a36818d18558559\"" Jan 27 05:40:05.780057 containerd[1681]: time="2026-01-27T05:40:05.780020356Z" level=info msg="StartContainer for \"c52759cfa003b7473686051457fe1c4a1d1f19f8f583c4e38a36818d18558559\"" Jan 27 05:40:05.780844 containerd[1681]: time="2026-01-27T05:40:05.780803836Z" level=info msg="connecting to shim c52759cfa003b7473686051457fe1c4a1d1f19f8f583c4e38a36818d18558559" 
address="unix:///run/containerd/s/72b06e4067ba9276d56a80529aa88262fe1686cd543dbf5bcbf3a9fb35ddf190" protocol=ttrpc version=3 Jan 27 05:40:05.800217 systemd[1]: Started cri-containerd-c52759cfa003b7473686051457fe1c4a1d1f19f8f583c4e38a36818d18558559.scope - libcontainer container c52759cfa003b7473686051457fe1c4a1d1f19f8f583c4e38a36818d18558559. Jan 27 05:40:05.811000 audit: BPF prog-id=219 op=LOAD Jan 27 05:40:05.812000 audit: BPF prog-id=220 op=LOAD Jan 27 05:40:05.812000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4335 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335323735396366613030336237343733363836303531343537666531 Jan 27 05:40:05.812000 audit: BPF prog-id=220 op=UNLOAD Jan 27 05:40:05.812000 audit[4372]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335323735396366613030336237343733363836303531343537666531 Jan 27 05:40:05.812000 audit: BPF prog-id=221 op=LOAD Jan 27 05:40:05.812000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4335 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335323735396366613030336237343733363836303531343537666531 Jan 27 05:40:05.812000 audit: BPF prog-id=222 op=LOAD Jan 27 05:40:05.812000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4335 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335323735396366613030336237343733363836303531343537666531 Jan 27 05:40:05.812000 audit: BPF prog-id=222 op=UNLOAD Jan 27 05:40:05.812000 audit[4372]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.812000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335323735396366613030336237343733363836303531343537666531 Jan 27 05:40:05.812000 audit: BPF prog-id=221 op=UNLOAD Jan 27 05:40:05.812000 audit[4372]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4335 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335323735396366613030336237343733363836303531343537666531 Jan 27 05:40:05.812000 audit: BPF prog-id=223 op=LOAD Jan 27 05:40:05.812000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4335 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:05.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335323735396366613030336237343733363836303531343537666531 Jan 27 05:40:05.830632 containerd[1681]: time="2026-01-27T05:40:05.829991025Z" level=info msg="StartContainer for \"c52759cfa003b7473686051457fe1c4a1d1f19f8f583c4e38a36818d18558559\" returns successfully" Jan 27 05:40:06.504116 containerd[1681]: time="2026-01-27T05:40:06.504081668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68f86b6c77-cbrw7,Uid:cf580b0a-7ab1-4b43-ad9f-7219ad766e09,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:06.597452 systemd-networkd[1489]: calid30295fa7e8: Link UP Jan 27 05:40:06.598276 systemd-networkd[1489]: calid30295fa7e8: Gained carrier Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.542 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0 calico-kube-controllers-68f86b6c77- calico-system cf580b0a-7ab1-4b43-ad9f-7219ad766e09 797 0 2026-01-27 05:39:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68f86b6c77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 calico-kube-controllers-68f86b6c77-cbrw7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid30295fa7e8 [] [] }} ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.542 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" 
Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.564 [INFO][4415] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" HandleID="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.564 [INFO][4415] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" HandleID="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"calico-kube-controllers-68f86b6c77-cbrw7", "timestamp":"2026-01-27 05:40:06.564859874 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.565 [INFO][4415] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.565 [INFO][4415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.565 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.571 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.575 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.578 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.580 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.582 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.582 [INFO][4415] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.583 [INFO][4415] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62 Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.587 [INFO][4415] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 
handle="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.593 [INFO][4415] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.67/26] block=192.168.51.64/26 handle="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.593 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.67/26] handle="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.593 [INFO][4415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:40:06.614789 containerd[1681]: 2026-01-27 05:40:06.593 [INFO][4415] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.67/26] IPv6=[] ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" HandleID="k8s-pod-network.d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" Jan 27 05:40:06.616841 containerd[1681]: 2026-01-27 05:40:06.595 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0", GenerateName:"calico-kube-controllers-68f86b6c77-", Namespace:"calico-system", SelfLink:"", UID:"cf580b0a-7ab1-4b43-ad9f-7219ad766e09", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68f86b6c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"calico-kube-controllers-68f86b6c77-cbrw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid30295fa7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:06.616841 containerd[1681]: 2026-01-27 05:40:06.595 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.67/32] ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" 
WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" Jan 27 05:40:06.616841 containerd[1681]: 2026-01-27 05:40:06.595 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid30295fa7e8 ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" Jan 27 05:40:06.616841 containerd[1681]: 2026-01-27 05:40:06.600 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" Jan 27 05:40:06.616841 containerd[1681]: 2026-01-27 05:40:06.601 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0", GenerateName:"calico-kube-controllers-68f86b6c77-", Namespace:"calico-system", SelfLink:"", UID:"cf580b0a-7ab1-4b43-ad9f-7219ad766e09", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68f86b6c77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62", Pod:"calico-kube-controllers-68f86b6c77-cbrw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid30295fa7e8", MAC:"fe:40:79:4c:d8:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:06.616841 containerd[1681]: 2026-01-27 05:40:06.612 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" Namespace="calico-system" Pod="calico-kube-controllers-68f86b6c77-cbrw7" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--kube--controllers--68f86b6c77--cbrw7-eth0" Jan 27 05:40:06.626000 audit[4429]: NETFILTER_CFG table=filter:126 family=2 entries=40 op=nft_register_chain pid=4429 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 
05:40:06.628308 kernel: kauditd_printk_skb: 284 callbacks suppressed Jan 27 05:40:06.628345 kernel: audit: type=1325 audit(1769492406.626:677): table=filter:126 family=2 entries=40 op=nft_register_chain pid=4429 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:06.626000 audit[4429]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffe19ca4d50 a2=0 a3=7ffe19ca4d3c items=0 ppid=4121 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.634686 kernel: audit: type=1300 audit(1769492406.626:677): arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffe19ca4d50 a2=0 a3=7ffe19ca4d3c items=0 ppid=4121 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.634732 kernel: audit: type=1327 audit(1769492406.626:677): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:06.626000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:06.656056 containerd[1681]: time="2026-01-27T05:40:06.655746117Z" level=info msg="connecting to shim d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62" address="unix:///run/containerd/s/93d503540565d4c4c2265b08d8ffa7f41b8880f66c3cd10cb5771ce25480c574" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:06.672327 kubelet[2895]: I0127 05:40:06.672282 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2xtdt" podStartSLOduration=36.672266441 podStartE2EDuration="36.672266441s" podCreationTimestamp="2026-01-27 05:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:40:06.670611299 +0000 UTC m=+41.265109850" watchObservedRunningTime="2026-01-27 05:40:06.672266441 +0000 UTC m=+41.266764993" Jan 27 05:40:06.695223 systemd[1]: Started cri-containerd-d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62.scope - libcontainer container d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62. 
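The NETFILTER_CFG audit records above carry the triggering command line only as a hex-encoded PROCTITLE field. A minimal decoding sketch follows; the sample value is copied verbatim from the iptables-nft-re record above, and the only assumption is the standard auditd encoding (hex bytes, with NUL bytes separating the argv entries). The runc records decode the same way (runc --root /run/containerd/runc/k8s.io --log …, truncated at the container ID).

    # Decode an audit PROCTITLE field: hex bytes, argv entries separated by NULs.
    # The sample value is copied from the NETFILTER_CFG record above, not invented.
    SAMPLE = (
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130"
        "002D2D776169742D696E74657276616C003530303030"
    )

    def decode_proctitle(hex_blob: str) -> str:
        """Return the command line recorded by auditd for this process."""
        raw = bytes.fromhex(hex_blob)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    if __name__ == "__main__":
        # Prints: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
        print(decode_proctitle(SAMPLE))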
Jan 27 05:40:06.703000 audit[4465]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:06.707124 kernel: audit: type=1325 audit(1769492406.703:678): table=filter:127 family=2 entries=20 op=nft_register_rule pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:06.703000 audit[4465]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea4f1fd70 a2=0 a3=7ffea4f1fd5c items=0 ppid=2997 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.712056 kernel: audit: type=1300 audit(1769492406.703:678): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea4f1fd70 a2=0 a3=7ffea4f1fd5c items=0 ppid=2997 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:06.706000 audit[4465]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:06.716642 kernel: audit: type=1327 audit(1769492406.703:678): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:06.716689 kernel: audit: type=1325 audit(1769492406.706:679): table=nat:128 family=2 entries=14 op=nft_register_rule pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:06.706000 audit[4465]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffea4f1fd70 a2=0 a3=0 items=0 ppid=2997 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.723066 kernel: audit: type=1300 audit(1769492406.706:679): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffea4f1fd70 a2=0 a3=0 items=0 ppid=2997 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:06.727052 kernel: audit: type=1327 audit(1769492406.706:679): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:06.731000 audit: BPF prog-id=224 op=LOAD Jan 27 05:40:06.734148 kernel: audit: type=1334 audit(1769492406.731:680): prog-id=224 op=LOAD Jan 27 05:40:06.731000 audit: BPF prog-id=225 op=LOAD Jan 27 05:40:06.731000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4439 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.731000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376638373230373863323061633034663364626532353435393834 Jan 27 05:40:06.731000 audit: BPF prog-id=225 op=UNLOAD Jan 27 05:40:06.731000 audit[4450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4439 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376638373230373863323061633034663364626532353435393834 Jan 27 05:40:06.731000 audit: BPF prog-id=226 op=LOAD Jan 27 05:40:06.731000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4439 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376638373230373863323061633034663364626532353435393834 Jan 27 05:40:06.731000 audit: BPF prog-id=227 op=LOAD Jan 27 05:40:06.731000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4439 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376638373230373863323061633034663364626532353435393834 Jan 27 05:40:06.731000 audit: BPF prog-id=227 op=UNLOAD Jan 27 05:40:06.731000 audit[4450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4439 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376638373230373863323061633034663364626532353435393834 Jan 27 05:40:06.731000 audit: BPF prog-id=226 op=UNLOAD Jan 27 05:40:06.731000 audit[4450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4439 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.731000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376638373230373863323061633034663364626532353435393834 Jan 27 05:40:06.731000 audit: BPF prog-id=228 op=LOAD Jan 27 05:40:06.731000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4439 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376638373230373863323061633034663364626532353435393834 Jan 27 05:40:06.737000 audit[4472]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:06.737000 audit[4472]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed6e8ee90 a2=0 a3=7ffed6e8ee7c items=0 ppid=2997 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:06.738000 audit[4472]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:06.738000 audit[4472]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffed6e8ee90 a2=0 a3=7ffed6e8ee7c items=0 ppid=2997 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:06.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:06.771575 containerd[1681]: time="2026-01-27T05:40:06.771488888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68f86b6c77-cbrw7,Uid:cf580b0a-7ab1-4b43-ad9f-7219ad766e09,Namespace:calico-system,Attempt:0,} returns sandbox id \"d97f872078c20ac04f3dbe2545984d237f9229e5f41d8546be55b48685490e62\"" Jan 27 05:40:06.773895 containerd[1681]: time="2026-01-27T05:40:06.773851208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:40:07.111597 containerd[1681]: time="2026-01-27T05:40:07.111469995Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:07.116383 containerd[1681]: time="2026-01-27T05:40:07.116336009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:07.116520 containerd[1681]: time="2026-01-27T05:40:07.116393065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Jan 27 05:40:07.116935 kubelet[2895]: E0127 05:40:07.116677 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:40:07.116935 kubelet[2895]: E0127 05:40:07.116746 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:40:07.116935 kubelet[2895]: E0127 05:40:07.116878 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68f86b6c77-cbrw7_calico-system(cf580b0a-7ab1-4b43-ad9f-7219ad766e09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:07.118202 kubelet[2895]: E0127 05:40:07.118170 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:40:07.505127 containerd[1681]: time="2026-01-27T05:40:07.504990313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrrrf,Uid:0babb8ec-614b-4d27-a5c4-cb80e7016a50,Namespace:kube-system,Attempt:0,}" Jan 27 05:40:07.505371 containerd[1681]: time="2026-01-27T05:40:07.505279928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mf6bj,Uid:7ea10135-90f4-4815-b58a-eefd271d18ce,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:07.570438 systemd-networkd[1489]: calif8ac81bd144: Gained IPv6LL Jan 27 05:40:07.633203 systemd-networkd[1489]: calid30295fa7e8: Gained IPv6LL Jan 27 05:40:07.649477 systemd-networkd[1489]: cali60ccf7a2d1c: Link UP Jan 27 05:40:07.650127 systemd-networkd[1489]: cali60ccf7a2d1c: Gained carrier Jan 27 05:40:07.661953 kubelet[2895]: E0127 05:40:07.661909 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.555 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0 csi-node-driver- calico-system 7ea10135-90f4-4815-b58a-eefd271d18ce 691 0 2026-01-27 05:39:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 csi-node-driver-mf6bj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali60ccf7a2d1c [] [] }} ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.555 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.600 [INFO][4506] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" HandleID="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.600 [INFO][4506] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" HandleID="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"csi-node-driver-mf6bj", "timestamp":"2026-01-27 05:40:07.600523648 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.600 [INFO][4506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.600 [INFO][4506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.600 [INFO][4506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.611 [INFO][4506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.616 [INFO][4506] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.620 [INFO][4506] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.622 [INFO][4506] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.624 [INFO][4506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.624 [INFO][4506] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.626 [INFO][4506] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572 Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.630 [INFO][4506] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.637 [INFO][4506] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.68/26] block=192.168.51.64/26 
handle="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.637 [INFO][4506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.68/26] handle="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.638 [INFO][4506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:40:07.672655 containerd[1681]: 2026-01-27 05:40:07.638 [INFO][4506] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.68/26] IPv6=[] ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" HandleID="k8s-pod-network.1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" Jan 27 05:40:07.673478 containerd[1681]: 2026-01-27 05:40:07.641 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ea10135-90f4-4815-b58a-eefd271d18ce", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"csi-node-driver-mf6bj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali60ccf7a2d1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:07.673478 containerd[1681]: 2026-01-27 05:40:07.641 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.68/32] ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" Jan 27 05:40:07.673478 containerd[1681]: 2026-01-27 05:40:07.641 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60ccf7a2d1c ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" Jan 27 05:40:07.673478 
containerd[1681]: 2026-01-27 05:40:07.650 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" Jan 27 05:40:07.673478 containerd[1681]: 2026-01-27 05:40:07.651 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ea10135-90f4-4815-b58a-eefd271d18ce", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572", Pod:"csi-node-driver-mf6bj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali60ccf7a2d1c", MAC:"ea:54:cb:90:81:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:07.673478 containerd[1681]: 2026-01-27 05:40:07.669 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" Namespace="calico-system" Pod="csi-node-driver-mf6bj" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-csi--node--driver--mf6bj-eth0" Jan 27 05:40:07.685000 audit[4526]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:07.685000 audit[4526]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffc026a55e0 a2=0 a3=7ffc026a55cc items=0 ppid=4121 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:07.685000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:07.747294 systemd-networkd[1489]: cali29b4fe748bf: Link UP Jan 27 05:40:07.747525 systemd-networkd[1489]: cali29b4fe748bf: Gained carrier Jan 27 
05:40:07.864000 audit[4536]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=4536 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:07.864000 audit[4536]: SYSCALL arch=c000003e syscall=46 success=yes exit=21532 a0=3 a1=7ffdec0085d0 a2=0 a3=7ffdec0085bc items=0 ppid=4121 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:07.864000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.556 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0 coredns-668d6bf9bc- kube-system 0babb8ec-614b-4d27-a5c4-cb80e7016a50 788 0 2026-01-27 05:39:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 coredns-668d6bf9bc-nrrrf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29b4fe748bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.556 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.606 [INFO][4505] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" HandleID="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.607 [INFO][4505] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" HandleID="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cef50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"coredns-668d6bf9bc-nrrrf", "timestamp":"2026-01-27 05:40:07.606817323 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.607 [INFO][4505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.638 [INFO][4505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.638 [INFO][4505] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.712 [INFO][4505] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.718 [INFO][4505] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.722 [INFO][4505] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.723 [INFO][4505] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.726 [INFO][4505] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.726 [INFO][4505] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.728 [INFO][4505] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675 Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.732 [INFO][4505] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.741 [INFO][4505] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.69/26] block=192.168.51.64/26 handle="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.741 [INFO][4505] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.69/26] handle="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.742 [INFO][4505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
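The IPAM records above show Calico reusing the node's block affinity for 192.168.51.64/26 and handing out consecutive addresses from it: .66 (coredns-668d6bf9bc-2xtdt), .67 (calico-kube-controllers-68f86b6c77-cbrw7), .68 (csi-node-driver-mf6bj), .69 (coredns-668d6bf9bc-nrrrf). A small sketch of the address math, using only the Python standard library and values taken from the log:

    # Address math behind the IPAM records above: one /26 block affined to the
    # node, pod addresses assigned from inside it. All values come from the log.
    import ipaddress

    block = ipaddress.ip_network("192.168.51.64/26")
    pods = {
        "coredns-668d6bf9bc-2xtdt": "192.168.51.66",
        "calico-kube-controllers-68f86b6c77-cbrw7": "192.168.51.67",
        "csi-node-driver-mf6bj": "192.168.51.68",
        "coredns-668d6bf9bc-nrrrf": "192.168.51.69",
    }

    print(block.num_addresses)  # 64 -> a /26 affinity covers 64 addresses on this node
    for pod, ip in pods.items():
        # Every assigned pod address falls inside the node's affined block.
        print(pod, ipaddress.ip_address(ip) in block)

The host-wide IPAM lock visible in these records serializes concurrent CNI ADD requests on the node, which matches the timestamps above: the coredns-nrrrf request waits from 07.607 and only acquires the lock at 07.638, right after the csi-node-driver request releases it.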
Jan 27 05:40:07.916221 containerd[1681]: 2026-01-27 05:40:07.742 [INFO][4505] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.69/26] IPv6=[] ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" HandleID="k8s-pod-network.44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" Jan 27 05:40:07.917112 containerd[1681]: 2026-01-27 05:40:07.743 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0babb8ec-614b-4d27-a5c4-cb80e7016a50", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"coredns-668d6bf9bc-nrrrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29b4fe748bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:07.917112 containerd[1681]: 2026-01-27 05:40:07.743 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.69/32] ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" Jan 27 05:40:07.917112 containerd[1681]: 2026-01-27 05:40:07.743 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29b4fe748bf ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" Jan 27 05:40:07.917112 containerd[1681]: 2026-01-27 05:40:07.745 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" Jan 27 05:40:07.917112 containerd[1681]: 2026-01-27 05:40:07.745 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0babb8ec-614b-4d27-a5c4-cb80e7016a50", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675", Pod:"coredns-668d6bf9bc-nrrrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29b4fe748bf", MAC:"32:04:42:ad:65:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:07.917112 containerd[1681]: 2026-01-27 05:40:07.757 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" Namespace="kube-system" Pod="coredns-668d6bf9bc-nrrrf" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-coredns--668d6bf9bc--nrrrf-eth0" Jan 27 05:40:08.179364 containerd[1681]: time="2026-01-27T05:40:08.179150918Z" level=info msg="connecting to shim 1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572" address="unix:///run/containerd/s/1a83bd0e1c0c93871cbb5bf09cd6a2fde02abb10ac0870e5e76de80b54128c76" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:08.213980 containerd[1681]: time="2026-01-27T05:40:08.213931027Z" level=info msg="connecting to shim 44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675" address="unix:///run/containerd/s/90c4aff2b479411964ed9ade4a6bf89ba7da9f864503d1f992e73bf7a2091aaf" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:08.217489 systemd[1]: Started cri-containerd-1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572.scope - libcontainer container 
1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572. Jan 27 05:40:08.232000 audit: BPF prog-id=229 op=LOAD Jan 27 05:40:08.232000 audit: BPF prog-id=230 op=LOAD Jan 27 05:40:08.232000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4545 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162623562663665313030666265333035643166663263616165376161 Jan 27 05:40:08.232000 audit: BPF prog-id=230 op=UNLOAD Jan 27 05:40:08.232000 audit[4555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162623562663665313030666265333035643166663263616165376161 Jan 27 05:40:08.232000 audit: BPF prog-id=231 op=LOAD Jan 27 05:40:08.232000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4545 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162623562663665313030666265333035643166663263616165376161 Jan 27 05:40:08.232000 audit: BPF prog-id=232 op=LOAD Jan 27 05:40:08.232000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4545 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162623562663665313030666265333035643166663263616165376161 Jan 27 05:40:08.233000 audit: BPF prog-id=232 op=UNLOAD Jan 27 05:40:08.233000 audit[4555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162623562663665313030666265333035643166663263616165376161 Jan 27 05:40:08.233000 audit: BPF 
prog-id=231 op=UNLOAD Jan 27 05:40:08.233000 audit[4555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162623562663665313030666265333035643166663263616165376161 Jan 27 05:40:08.233000 audit: BPF prog-id=233 op=LOAD Jan 27 05:40:08.233000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4545 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162623562663665313030666265333035643166663263616165376161 Jan 27 05:40:08.254477 systemd[1]: Started cri-containerd-44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675.scope - libcontainer container 44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675. Jan 27 05:40:08.259052 containerd[1681]: time="2026-01-27T05:40:08.258983622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mf6bj,Uid:7ea10135-90f4-4815-b58a-eefd271d18ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bb5bf6e100fbe305d1ff2caae7aa7aeea7de61a23e893d071d99f520524a572\"" Jan 27 05:40:08.260986 containerd[1681]: time="2026-01-27T05:40:08.260955691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:40:08.269000 audit: BPF prog-id=234 op=LOAD Jan 27 05:40:08.269000 audit: BPF prog-id=235 op=LOAD Jan 27 05:40:08.269000 audit[4593]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4575 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434613737366163633336303736333563343034343538393662643137 Jan 27 05:40:08.269000 audit: BPF prog-id=235 op=UNLOAD Jan 27 05:40:08.269000 audit[4593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434613737366163633336303736333563343034343538393662643137 Jan 27 05:40:08.270000 audit: BPF prog-id=236 op=LOAD Jan 27 05:40:08.270000 audit[4593]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4575 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434613737366163633336303736333563343034343538393662643137 Jan 27 05:40:08.270000 audit: BPF prog-id=237 op=LOAD Jan 27 05:40:08.270000 audit[4593]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4575 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434613737366163633336303736333563343034343538393662643137 Jan 27 05:40:08.270000 audit: BPF prog-id=237 op=UNLOAD Jan 27 05:40:08.270000 audit[4593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434613737366163633336303736333563343034343538393662643137 Jan 27 05:40:08.270000 audit: BPF prog-id=236 op=UNLOAD Jan 27 05:40:08.270000 audit[4593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434613737366163633336303736333563343034343538393662643137 Jan 27 05:40:08.270000 audit: BPF prog-id=238 op=LOAD Jan 27 05:40:08.270000 audit[4593]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4575 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434613737366163633336303736333563343034343538393662643137 Jan 27 05:40:08.307568 containerd[1681]: time="2026-01-27T05:40:08.307525215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrrrf,Uid:0babb8ec-614b-4d27-a5c4-cb80e7016a50,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675\"" Jan 27 05:40:08.311042 containerd[1681]: time="2026-01-27T05:40:08.310993245Z" level=info msg="CreateContainer within sandbox \"44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 05:40:08.323541 containerd[1681]: time="2026-01-27T05:40:08.322953055Z" level=info msg="Container 6cd83658af5091fbd0a222e519440a429ead7d11abfffce20a7b765150a5999c: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:08.329492 containerd[1681]: time="2026-01-27T05:40:08.329461990Z" level=info msg="CreateContainer within sandbox \"44a776acc3607635c40445896bd17774e41878c38b144b30d62b28cfd8f78675\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cd83658af5091fbd0a222e519440a429ead7d11abfffce20a7b765150a5999c\"" Jan 27 05:40:08.330325 containerd[1681]: time="2026-01-27T05:40:08.330304952Z" level=info msg="StartContainer for \"6cd83658af5091fbd0a222e519440a429ead7d11abfffce20a7b765150a5999c\"" Jan 27 05:40:08.331805 containerd[1681]: time="2026-01-27T05:40:08.331782633Z" level=info msg="connecting to shim 6cd83658af5091fbd0a222e519440a429ead7d11abfffce20a7b765150a5999c" address="unix:///run/containerd/s/90c4aff2b479411964ed9ade4a6bf89ba7da9f864503d1f992e73bf7a2091aaf" protocol=ttrpc version=3 Jan 27 05:40:08.353340 systemd[1]: Started cri-containerd-6cd83658af5091fbd0a222e519440a429ead7d11abfffce20a7b765150a5999c.scope - libcontainer container 6cd83658af5091fbd0a222e519440a429ead7d11abfffce20a7b765150a5999c. Jan 27 05:40:08.364000 audit: BPF prog-id=239 op=LOAD Jan 27 05:40:08.365000 audit: BPF prog-id=240 op=LOAD Jan 27 05:40:08.365000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4575 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643833363538616635303931666264306132323265353139343430 Jan 27 05:40:08.365000 audit: BPF prog-id=240 op=UNLOAD Jan 27 05:40:08.365000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643833363538616635303931666264306132323265353139343430 Jan 27 05:40:08.365000 audit: BPF prog-id=241 op=LOAD Jan 27 05:40:08.365000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4575 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.365000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643833363538616635303931666264306132323265353139343430 Jan 27 05:40:08.365000 audit: BPF prog-id=242 op=LOAD Jan 27 05:40:08.365000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4575 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643833363538616635303931666264306132323265353139343430 Jan 27 05:40:08.365000 audit: BPF prog-id=242 op=UNLOAD Jan 27 05:40:08.365000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643833363538616635303931666264306132323265353139343430 Jan 27 05:40:08.365000 audit: BPF prog-id=241 op=UNLOAD Jan 27 05:40:08.365000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4575 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643833363538616635303931666264306132323265353139343430 Jan 27 05:40:08.365000 audit: BPF prog-id=243 op=LOAD Jan 27 05:40:08.365000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4575 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643833363538616635303931666264306132323265353139343430 Jan 27 05:40:08.383881 containerd[1681]: time="2026-01-27T05:40:08.383828892Z" level=info msg="StartContainer for \"6cd83658af5091fbd0a222e519440a429ead7d11abfffce20a7b765150a5999c\" returns successfully" Jan 27 05:40:08.595363 containerd[1681]: time="2026-01-27T05:40:08.595296701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:08.597306 containerd[1681]: time="2026-01-27T05:40:08.597206297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:40:08.597395 containerd[1681]: time="2026-01-27T05:40:08.597255608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:08.597645 kubelet[2895]: E0127 05:40:08.597585 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:40:08.598132 kubelet[2895]: E0127 05:40:08.597666 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:40:08.598132 kubelet[2895]: E0127 05:40:08.597819 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:08.600651 containerd[1681]: time="2026-01-27T05:40:08.600628785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:40:08.678807 kubelet[2895]: E0127 05:40:08.678748 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:40:08.706791 kubelet[2895]: I0127 05:40:08.706719 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nrrrf" podStartSLOduration=38.706692586 podStartE2EDuration="38.706692586s" podCreationTimestamp="2026-01-27 05:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:40:08.691473931 +0000 UTC m=+43.285972519" watchObservedRunningTime="2026-01-27 05:40:08.706692586 +0000 UTC m=+43.301191156" Jan 27 05:40:08.716000 audit[4659]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:08.716000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe695438d0 a2=0 a3=7ffe695438bc items=0 ppid=2997 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:08.723003 systemd-networkd[1489]: cali60ccf7a2d1c: Gained IPv6LL Jan 27 05:40:08.731000 audit[4659]: NETFILTER_CFG table=nat:134 family=2 entries=44 op=nft_register_rule pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:08.731000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe695438d0 a2=0 a3=7ffe695438bc items=0 ppid=2997 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:08.756000 audit[4661]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:08.756000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb08e7e40 a2=0 a3=7ffeb08e7e2c items=0 ppid=2997 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:08.785000 audit[4661]: NETFILTER_CFG table=nat:136 family=2 entries=56 op=nft_register_chain pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:08.785000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffeb08e7e40 a2=0 a3=7ffeb08e7e2c items=0 ppid=2997 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:08.785000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:08.935203 containerd[1681]: time="2026-01-27T05:40:08.934755156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:08.938836 containerd[1681]: time="2026-01-27T05:40:08.938560265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:40:08.938836 containerd[1681]: time="2026-01-27T05:40:08.938621834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:08.939319 kubelet[2895]: E0127 05:40:08.939220 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:40:08.939319 kubelet[2895]: E0127 05:40:08.939318 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:40:08.940212 kubelet[2895]: E0127 05:40:08.939539 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:08.941368 kubelet[2895]: E0127 05:40:08.941291 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:40:09.489530 systemd-networkd[1489]: cali29b4fe748bf: Gained IPv6LL Jan 27 05:40:09.506686 containerd[1681]: time="2026-01-27T05:40:09.506626567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-7r4l6,Uid:2a3676d6-dbc6-4326-8dba-e3375f935a86,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:40:09.626342 systemd-networkd[1489]: cali21a5916cf83: Link UP Jan 27 05:40:09.627041 systemd-networkd[1489]: cali21a5916cf83: Gained carrier Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.558 [INFO][4670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0 calico-apiserver-6d9df44df9- calico-apiserver 2a3676d6-dbc6-4326-8dba-e3375f935a86 794 0 2026-01-27 05:39:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d9df44df9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 calico-apiserver-6d9df44df9-7r4l6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21a5916cf83 [] [] }} ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.558 [INFO][4670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.588 [INFO][4681] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" HandleID="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.589 [INFO][4681] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" HandleID="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"calico-apiserver-6d9df44df9-7r4l6", "timestamp":"2026-01-27 05:40:09.58898639 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.589 [INFO][4681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.589 [INFO][4681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.589 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.595 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.599 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.602 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.603 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.605 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.605 [INFO][4681] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.606 [INFO][4681] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.612 [INFO][4681] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.619 [INFO][4681] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.70/26] block=192.168.51.64/26 handle="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.619 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.70/26] handle="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.619 [INFO][4681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 05:40:09.642153 containerd[1681]: 2026-01-27 05:40:09.619 [INFO][4681] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.70/26] IPv6=[] ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" HandleID="k8s-pod-network.fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" Jan 27 05:40:09.642921 containerd[1681]: 2026-01-27 05:40:09.621 [INFO][4670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0", GenerateName:"calico-apiserver-6d9df44df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a3676d6-dbc6-4326-8dba-e3375f935a86", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9df44df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"calico-apiserver-6d9df44df9-7r4l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21a5916cf83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:09.642921 containerd[1681]: 2026-01-27 05:40:09.621 [INFO][4670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.70/32] ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" Jan 27 05:40:09.642921 containerd[1681]: 2026-01-27 05:40:09.621 [INFO][4670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21a5916cf83 ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" Jan 27 05:40:09.642921 containerd[1681]: 2026-01-27 05:40:09.627 [INFO][4670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" Jan 27 05:40:09.642921 containerd[1681]: 2026-01-27 
05:40:09.627 [INFO][4670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0", GenerateName:"calico-apiserver-6d9df44df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a3676d6-dbc6-4326-8dba-e3375f935a86", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9df44df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf", Pod:"calico-apiserver-6d9df44df9-7r4l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21a5916cf83", MAC:"fe:22:b1:e2:ca:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:09.642921 containerd[1681]: 2026-01-27 05:40:09.639 [INFO][4670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-7r4l6" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--7r4l6-eth0" Jan 27 05:40:09.653000 audit[4695]: NETFILTER_CFG table=filter:137 family=2 entries=72 op=nft_register_chain pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:09.653000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=35812 a0=3 a1=7ffd0867c750 a2=0 a3=7ffd0867c73c items=0 ppid=4121 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.653000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:09.677846 containerd[1681]: time="2026-01-27T05:40:09.677277472Z" level=info msg="connecting to shim fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf" address="unix:///run/containerd/s/2593d66ddc50766311b9fe0cdd91e64245d8af701f6cc42a216634de9505eb70" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:09.689141 kubelet[2895]: E0127 05:40:09.688969 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:40:09.723356 systemd[1]: Started cri-containerd-fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf.scope - libcontainer container fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf. Jan 27 05:40:09.735000 audit: BPF prog-id=244 op=LOAD Jan 27 05:40:09.736000 audit: BPF prog-id=245 op=LOAD Jan 27 05:40:09.736000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4703 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316230373139343731346562346365613765336365666533396662 Jan 27 05:40:09.736000 audit: BPF prog-id=245 op=UNLOAD Jan 27 05:40:09.736000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4703 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316230373139343731346562346365613765336365666533396662 Jan 27 05:40:09.736000 audit: BPF prog-id=246 op=LOAD Jan 27 05:40:09.736000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4703 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316230373139343731346562346365613765336365666533396662 Jan 27 05:40:09.736000 audit: BPF prog-id=247 op=LOAD Jan 27 05:40:09.736000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4703 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.736000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316230373139343731346562346365613765336365666533396662 Jan 27 05:40:09.736000 audit: BPF prog-id=247 op=UNLOAD Jan 27 05:40:09.736000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4703 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316230373139343731346562346365613765336365666533396662 Jan 27 05:40:09.736000 audit: BPF prog-id=246 op=UNLOAD Jan 27 05:40:09.736000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4703 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316230373139343731346562346365613765336365666533396662 Jan 27 05:40:09.736000 audit: BPF prog-id=248 op=LOAD Jan 27 05:40:09.736000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4703 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:09.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316230373139343731346562346365613765336365666533396662 Jan 27 05:40:09.772495 containerd[1681]: time="2026-01-27T05:40:09.772320539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-7r4l6,Uid:2a3676d6-dbc6-4326-8dba-e3375f935a86,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fe1b07194714eb4cea7e3cefe39fb2e4c7253c41244e417392266ee9b8445ecf\"" Jan 27 05:40:09.775052 containerd[1681]: time="2026-01-27T05:40:09.774607493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:40:10.099839 containerd[1681]: time="2026-01-27T05:40:10.099387275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:10.101338 containerd[1681]: time="2026-01-27T05:40:10.101296835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:40:10.101466 containerd[1681]: time="2026-01-27T05:40:10.101379430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:10.101675 kubelet[2895]: E0127 05:40:10.101629 2895 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:10.101756 kubelet[2895]: E0127 05:40:10.101674 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:10.101942 kubelet[2895]: E0127 05:40:10.101896 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ptxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-7r4l6_calico-apiserver(2a3676d6-dbc6-4326-8dba-e3375f935a86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:10.103270 kubelet[2895]: E0127 05:40:10.103232 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:40:10.505052 containerd[1681]: time="2026-01-27T05:40:10.504846033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kbs8f,Uid:22258eaf-cd76-4bd2-ad47-8f4a85b664bd,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:10.505337 containerd[1681]: time="2026-01-27T05:40:10.505293181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-9vzkl,Uid:7997e895-ab0d-47da-83eb-264fa47d7c87,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:40:10.684797 kubelet[2895]: E0127 05:40:10.684632 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:40:10.718259 systemd-networkd[1489]: cali8e7f071c681: Link UP Jan 27 05:40:10.719581 systemd-networkd[1489]: cali8e7f071c681: Gained carrier Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.563 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0 calico-apiserver-6d9df44df9- calico-apiserver 7997e895-ab0d-47da-83eb-264fa47d7c87 793 0 2026-01-27 05:39:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d9df44df9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 calico-apiserver-6d9df44df9-9vzkl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e7f071c681 [] [] }} ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.563 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.644 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" HandleID="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.644 [INFO][4768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" HandleID="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"calico-apiserver-6d9df44df9-9vzkl", "timestamp":"2026-01-27 05:40:10.644425591 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.644 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.644 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.644 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.668 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.673 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.678 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.681 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.686 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.686 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.690 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.697 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.712 [INFO][4768] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.71/26] block=192.168.51.64/26 handle="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.712 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.71/26] handle="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.712 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 05:40:10.739233 containerd[1681]: 2026-01-27 05:40:10.712 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.71/26] IPv6=[] ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" HandleID="k8s-pod-network.d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" Jan 27 05:40:10.739764 containerd[1681]: 2026-01-27 05:40:10.715 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0", GenerateName:"calico-apiserver-6d9df44df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7997e895-ab0d-47da-83eb-264fa47d7c87", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9df44df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"calico-apiserver-6d9df44df9-9vzkl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e7f071c681", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:10.739764 containerd[1681]: 2026-01-27 05:40:10.715 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.71/32] ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" Jan 27 05:40:10.739764 containerd[1681]: 2026-01-27 05:40:10.715 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e7f071c681 ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" Jan 27 05:40:10.739764 containerd[1681]: 2026-01-27 05:40:10.719 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" Jan 27 05:40:10.739764 containerd[1681]: 2026-01-27 
05:40:10.720 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0", GenerateName:"calico-apiserver-6d9df44df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7997e895-ab0d-47da-83eb-264fa47d7c87", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9df44df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe", Pod:"calico-apiserver-6d9df44df9-9vzkl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e7f071c681", MAC:"56:93:c3:b5:7d:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:10.739764 containerd[1681]: 2026-01-27 05:40:10.735 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" Namespace="calico-apiserver" Pod="calico-apiserver-6d9df44df9-9vzkl" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-calico--apiserver--6d9df44df9--9vzkl-eth0" Jan 27 05:40:10.748000 audit[4790]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:10.748000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4dc2eae0 a2=0 a3=7fff4dc2eacc items=0 ppid=2997 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:10.753000 audit[4790]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:10.753000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff4dc2eae0 a2=0 a3=7fff4dc2eacc items=0 ppid=2997 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:10.775196 containerd[1681]: time="2026-01-27T05:40:10.774487318Z" level=info msg="connecting to shim d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe" address="unix:///run/containerd/s/8168d4923abb4fda2adae8cb941199e5554a3bcf93e2f9b05f5f9ee056ccf90c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:10.778000 audit[4795]: NETFILTER_CFG table=filter:140 family=2 entries=53 op=nft_register_chain pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:10.778000 audit[4795]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7fff2eccf3f0 a2=0 a3=7fff2eccf3dc items=0 ppid=4121 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.778000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:10.810329 systemd[1]: Started cri-containerd-d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe.scope - libcontainer container d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe. Jan 27 05:40:10.825440 systemd-networkd[1489]: cali9b5453a0a7e: Link UP Jan 27 05:40:10.826606 systemd-networkd[1489]: cali9b5453a0a7e: Gained carrier Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.561 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0 goldmane-666569f655- calico-system 22258eaf-cd76-4bd2-ad47-8f4a85b664bd 796 0 2026-01-27 05:39:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4592-0-0-n-eb4c5d05b1 goldmane-666569f655-kbs8f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9b5453a0a7e [] [] }} ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.561 [INFO][4743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.644 [INFO][4773] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" HandleID="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.646 [INFO][4773] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" 
HandleID="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000373b90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-eb4c5d05b1", "pod":"goldmane-666569f655-kbs8f", "timestamp":"2026-01-27 05:40:10.644813839 +0000 UTC"}, Hostname:"ci-4592-0-0-n-eb4c5d05b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.648 [INFO][4773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.713 [INFO][4773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.713 [INFO][4773] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-eb4c5d05b1' Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.768 [INFO][4773] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.775 [INFO][4773] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.783 [INFO][4773] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.787 [INFO][4773] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.794 [INFO][4773] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.794 [INFO][4773] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.797 [INFO][4773] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192 Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.808 [INFO][4773] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.817 [INFO][4773] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.72/26] block=192.168.51.64/26 handle="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.817 [INFO][4773] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.72/26] handle="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" host="ci-4592-0-0-n-eb4c5d05b1" Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.817 [INFO][4773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 05:40:10.845154 containerd[1681]: 2026-01-27 05:40:10.817 [INFO][4773] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.72/26] IPv6=[] ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" HandleID="k8s-pod-network.124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Workload="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" Jan 27 05:40:10.846324 containerd[1681]: 2026-01-27 05:40:10.820 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"22258eaf-cd76-4bd2-ad47-8f4a85b664bd", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"", Pod:"goldmane-666569f655-kbs8f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b5453a0a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:10.846324 containerd[1681]: 2026-01-27 05:40:10.820 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.72/32] ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" Jan 27 05:40:10.846324 containerd[1681]: 2026-01-27 05:40:10.820 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b5453a0a7e ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" Jan 27 05:40:10.846324 containerd[1681]: 2026-01-27 05:40:10.826 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" Jan 27 05:40:10.846324 containerd[1681]: 2026-01-27 05:40:10.827 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" 
Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"22258eaf-cd76-4bd2-ad47-8f4a85b664bd", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-eb4c5d05b1", ContainerID:"124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192", Pod:"goldmane-666569f655-kbs8f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b5453a0a7e", MAC:"b2:46:b1:ff:15:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:40:10.846324 containerd[1681]: 2026-01-27 05:40:10.841 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" Namespace="calico-system" Pod="goldmane-666569f655-kbs8f" WorkloadEndpoint="ci--4592--0--0--n--eb4c5d05b1-k8s-goldmane--666569f655--kbs8f-eth0" Jan 27 05:40:10.853000 audit: BPF prog-id=249 op=LOAD Jan 27 05:40:10.853000 audit: BPF prog-id=250 op=LOAD Jan 27 05:40:10.853000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4800 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303961656334643532343933356538663635663161363232636637 Jan 27 05:40:10.853000 audit: BPF prog-id=250 op=UNLOAD Jan 27 05:40:10.853000 audit[4813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303961656334643532343933356538663635663161363232636637 Jan 27 05:40:10.854000 audit: BPF prog-id=251 op=LOAD Jan 27 05:40:10.854000 audit[4813]: SYSCALL arch=c000003e syscall=321 
success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4800 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303961656334643532343933356538663635663161363232636637 Jan 27 05:40:10.854000 audit: BPF prog-id=252 op=LOAD Jan 27 05:40:10.854000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4800 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303961656334643532343933356538663635663161363232636637 Jan 27 05:40:10.854000 audit: BPF prog-id=252 op=UNLOAD Jan 27 05:40:10.854000 audit[4813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303961656334643532343933356538663635663161363232636637 Jan 27 05:40:10.854000 audit: BPF prog-id=251 op=UNLOAD Jan 27 05:40:10.854000 audit[4813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303961656334643532343933356538663635663161363232636637 Jan 27 05:40:10.854000 audit: BPF prog-id=253 op=LOAD Jan 27 05:40:10.854000 audit[4813]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4800 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303961656334643532343933356538663635663161363232636637 Jan 27 05:40:10.883000 audit[4841]: NETFILTER_CFG table=filter:141 family=2 entries=64 op=nft_register_chain pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:40:10.883000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=31104 a0=3 
a1=7ffd4ed2fb20 a2=0 a3=7ffd4ed2fb0c items=0 ppid=4121 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.883000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:40:10.887120 containerd[1681]: time="2026-01-27T05:40:10.887085904Z" level=info msg="connecting to shim 124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192" address="unix:///run/containerd/s/cf69905b6e02a3ed893592db62645ac9d3b52cf0c2dbd7c653b82ea3372c5845" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:10.931638 containerd[1681]: time="2026-01-27T05:40:10.931607843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9df44df9-9vzkl,Uid:7997e895-ab0d-47da-83eb-264fa47d7c87,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d709aec4d524935e8f65f1a622cf771196ac574f0ac53200ef92018170c3fffe\"" Jan 27 05:40:10.933217 systemd[1]: Started cri-containerd-124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192.scope - libcontainer container 124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192. Jan 27 05:40:10.938117 containerd[1681]: time="2026-01-27T05:40:10.937841883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:40:10.947000 audit: BPF prog-id=254 op=LOAD Jan 27 05:40:10.948000 audit: BPF prog-id=255 op=LOAD Jan 27 05:40:10.948000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132346138313763663130656231636638316462353262646235323461 Jan 27 05:40:10.948000 audit: BPF prog-id=255 op=UNLOAD Jan 27 05:40:10.948000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132346138313763663130656231636638316462353262646235323461 Jan 27 05:40:10.948000 audit: BPF prog-id=256 op=LOAD Jan 27 05:40:10.948000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.948000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132346138313763663130656231636638316462353262646235323461 Jan 27 05:40:10.948000 audit: BPF prog-id=257 op=LOAD Jan 27 05:40:10.948000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132346138313763663130656231636638316462353262646235323461 Jan 27 05:40:10.948000 audit: BPF prog-id=257 op=UNLOAD Jan 27 05:40:10.948000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132346138313763663130656231636638316462353262646235323461 Jan 27 05:40:10.948000 audit: BPF prog-id=256 op=UNLOAD Jan 27 05:40:10.948000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132346138313763663130656231636638316462353262646235323461 Jan 27 05:40:10.948000 audit: BPF prog-id=258 op=LOAD Jan 27 05:40:10.948000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:10.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132346138313763663130656231636638316462353262646235323461 Jan 27 05:40:10.995705 containerd[1681]: time="2026-01-27T05:40:10.995634421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kbs8f,Uid:22258eaf-cd76-4bd2-ad47-8f4a85b664bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"124a817cf10eb1cf81db52bdb524a9bdb31a81fd33167c5d37e0240b097ee192\"" Jan 27 05:40:11.267206 containerd[1681]: time="2026-01-27T05:40:11.267169084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:11.268974 containerd[1681]: time="2026-01-27T05:40:11.268893071Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:40:11.268974 containerd[1681]: time="2026-01-27T05:40:11.268930028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:11.269159 kubelet[2895]: E0127 05:40:11.269129 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:11.269900 kubelet[2895]: E0127 05:40:11.269388 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:11.269900 kubelet[2895]: E0127 05:40:11.269729 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz57s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-9vzkl_calico-apiserver(7997e895-ab0d-47da-83eb-264fa47d7c87): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:11.270058 containerd[1681]: time="2026-01-27T05:40:11.269661841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:40:11.271500 kubelet[2895]: E0127 05:40:11.271431 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:40:11.411140 systemd-networkd[1489]: cali21a5916cf83: Gained IPv6LL Jan 27 05:40:11.608059 containerd[1681]: time="2026-01-27T05:40:11.607823154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:11.609464 containerd[1681]: time="2026-01-27T05:40:11.609434627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:40:11.609535 containerd[1681]: time="2026-01-27T05:40:11.609503576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:11.609976 kubelet[2895]: E0127 05:40:11.609775 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:40:11.609976 kubelet[2895]: E0127 05:40:11.609817 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:40:11.609976 kubelet[2895]: E0127 05:40:11.609932 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2fvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kbs8f_calico-system(22258eaf-cd76-4bd2-ad47-8f4a85b664bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:11.611489 kubelet[2895]: E0127 05:40:11.611444 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:40:11.688839 kubelet[2895]: E0127 05:40:11.688787 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:40:11.690685 kubelet[2895]: E0127 05:40:11.690591 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:40:11.690685 kubelet[2895]: E0127 05:40:11.690674 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:40:11.723536 kernel: kauditd_printk_skb: 192 callbacks suppressed Jan 27 05:40:11.723642 kernel: audit: type=1325 audit(1769492411.718:749): table=filter:142 family=2 entries=14 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:11.718000 audit[4896]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:11.718000 audit[4896]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe05b3b9e0 a2=0 a3=7ffe05b3b9cc items=0 ppid=2997 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:11.729170 kernel: audit: type=1300 audit(1769492411.718:749): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe05b3b9e0 a2=0 a3=7ffe05b3b9cc items=0 ppid=2997 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:11.732098 kernel: audit: type=1327 audit(1769492411.718:749): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:11.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:11.734000 audit[4896]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:11.734000 audit[4896]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe05b3b9e0 a2=0 a3=7ffe05b3b9cc items=0 ppid=2997 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:11.739709 kernel: audit: type=1325 audit(1769492411.734:750): table=nat:143 family=2 entries=20 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:11.739769 kernel: audit: type=1300 audit(1769492411.734:750): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe05b3b9e0 a2=0 a3=7ffe05b3b9cc items=0 ppid=2997 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:11.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:11.743911 kernel: audit: type=1327 audit(1769492411.734:750): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:11.751000 audit[4898]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:11.755081 kernel: audit: type=1325 audit(1769492411.751:751): table=filter:144 family=2 entries=14 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:11.751000 audit[4898]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb12c0830 a2=0 a3=7ffdb12c081c items=0 ppid=2997 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:11.751000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:11.761265 kernel: audit: type=1300 audit(1769492411.751:751): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb12c0830 a2=0 a3=7ffdb12c081c items=0 ppid=2997 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:11.761337 kernel: audit: type=1327 audit(1769492411.751:751): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:11.760000 audit[4898]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:11.760000 audit[4898]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb12c0830 a2=0 a3=7ffdb12c081c items=0 ppid=2997 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:11.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:11.765072 kernel: audit: type=1325 audit(1769492411.760:752): table=nat:145 family=2 entries=20 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:12.177167 systemd-networkd[1489]: cali8e7f071c681: Gained IPv6LL Jan 27 05:40:12.626258 systemd-networkd[1489]: cali9b5453a0a7e: 
Gained IPv6LL Jan 27 05:40:12.692372 kubelet[2895]: E0127 05:40:12.692328 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:40:12.692827 kubelet[2895]: E0127 05:40:12.692496 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:40:16.507121 containerd[1681]: time="2026-01-27T05:40:16.506888431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:40:16.844081 containerd[1681]: time="2026-01-27T05:40:16.843951289Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:16.845417 containerd[1681]: time="2026-01-27T05:40:16.845380572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:40:16.845521 containerd[1681]: time="2026-01-27T05:40:16.845457491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:16.845646 kubelet[2895]: E0127 05:40:16.845589 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:40:16.845934 kubelet[2895]: E0127 05:40:16.845651 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:40:16.845934 kubelet[2895]: E0127 05:40:16.845779 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57b1faf7f7fc40c5b00dfb0c507a2180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:16.848046 containerd[1681]: time="2026-01-27T05:40:16.847890087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:40:17.187321 containerd[1681]: time="2026-01-27T05:40:17.187208047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:17.188827 containerd[1681]: time="2026-01-27T05:40:17.188774389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:40:17.188995 containerd[1681]: time="2026-01-27T05:40:17.188802691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:17.189586 kubelet[2895]: E0127 05:40:17.189095 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:40:17.189586 kubelet[2895]: E0127 05:40:17.189138 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:40:17.189586 kubelet[2895]: E0127 05:40:17.189240 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:17.190530 kubelet[2895]: E0127 05:40:17.190482 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:40:22.505697 containerd[1681]: time="2026-01-27T05:40:22.505664922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:40:22.837573 containerd[1681]: time="2026-01-27T05:40:22.837417665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:22.839656 containerd[1681]: time="2026-01-27T05:40:22.839494869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:40:22.839656 
containerd[1681]: time="2026-01-27T05:40:22.839534346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:22.839826 kubelet[2895]: E0127 05:40:22.839745 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:40:22.839826 kubelet[2895]: E0127 05:40:22.839786 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:40:22.840257 kubelet[2895]: E0127 05:40:22.839889 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:22.842093 containerd[1681]: time="2026-01-27T05:40:22.842067439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:40:23.202907 containerd[1681]: time="2026-01-27T05:40:23.202660227Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:23.205567 containerd[1681]: time="2026-01-27T05:40:23.205499298Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:40:23.205752 containerd[1681]: time="2026-01-27T05:40:23.205646103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:23.205986 kubelet[2895]: E0127 05:40:23.205939 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:40:23.206127 kubelet[2895]: E0127 05:40:23.206107 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:40:23.206412 kubelet[2895]: E0127 05:40:23.206357 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:23.208214 kubelet[2895]: E0127 05:40:23.207904 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:40:23.506145 containerd[1681]: time="2026-01-27T05:40:23.505808389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:40:23.847064 containerd[1681]: time="2026-01-27T05:40:23.846757193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:23.850668 containerd[1681]: time="2026-01-27T05:40:23.850418854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:40:23.851004 containerd[1681]: time="2026-01-27T05:40:23.850423154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:23.851455 kubelet[2895]: E0127 05:40:23.851376 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:40:23.852075 kubelet[2895]: E0127 05:40:23.851483 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:40:23.852075 kubelet[2895]: E0127 05:40:23.851765 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68f86b6c77-cbrw7_calico-system(cf580b0a-7ab1-4b43-ad9f-7219ad766e09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:23.853917 kubelet[2895]: E0127 05:40:23.853848 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:40:24.505307 containerd[1681]: time="2026-01-27T05:40:24.505242682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:40:24.867776 containerd[1681]: 
time="2026-01-27T05:40:24.867566344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:24.869200 containerd[1681]: time="2026-01-27T05:40:24.869109527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:40:24.869359 containerd[1681]: time="2026-01-27T05:40:24.869294803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:24.870059 containerd[1681]: time="2026-01-27T05:40:24.869806991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:40:24.870121 kubelet[2895]: E0127 05:40:24.869491 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:24.870121 kubelet[2895]: E0127 05:40:24.869534 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:24.870121 kubelet[2895]: E0127 05:40:24.869763 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ptxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-7r4l6_calico-apiserver(2a3676d6-dbc6-4326-8dba-e3375f935a86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:24.872067 kubelet[2895]: E0127 05:40:24.871758 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:40:25.201338 containerd[1681]: time="2026-01-27T05:40:25.201166989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:25.203391 containerd[1681]: time="2026-01-27T05:40:25.203321230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:40:25.203513 containerd[1681]: time="2026-01-27T05:40:25.203421677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:25.203683 kubelet[2895]: E0127 05:40:25.203622 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:40:25.204351 kubelet[2895]: E0127 05:40:25.203699 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:40:25.204351 kubelet[2895]: E0127 05:40:25.203869 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2fvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kbs8f_calico-system(22258eaf-cd76-4bd2-ad47-8f4a85b664bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:25.205268 kubelet[2895]: E0127 05:40:25.205232 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:40:25.505425 containerd[1681]: time="2026-01-27T05:40:25.504959048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
27 05:40:25.841869 containerd[1681]: time="2026-01-27T05:40:25.841709325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:25.844006 containerd[1681]: time="2026-01-27T05:40:25.843945516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:40:25.844177 containerd[1681]: time="2026-01-27T05:40:25.843971885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:25.844610 kubelet[2895]: E0127 05:40:25.844333 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:25.844610 kubelet[2895]: E0127 05:40:25.844388 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:25.844610 kubelet[2895]: E0127 05:40:25.844541 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz57s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6d9df44df9-9vzkl_calico-apiserver(7997e895-ab0d-47da-83eb-264fa47d7c87): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:25.846130 kubelet[2895]: E0127 05:40:25.846083 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:40:32.507353 kubelet[2895]: E0127 05:40:32.507291 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:40:36.505102 kubelet[2895]: E0127 05:40:36.505044 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:40:36.509071 kubelet[2895]: E0127 05:40:36.508408 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:40:38.504775 kubelet[2895]: E0127 05:40:38.504741 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:40:38.505188 kubelet[2895]: E0127 05:40:38.504864 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:40:39.508765 kubelet[2895]: E0127 05:40:39.508731 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:40:46.505624 containerd[1681]: time="2026-01-27T05:40:46.505408815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:40:46.864722 containerd[1681]: time="2026-01-27T05:40:46.864612553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:46.877265 containerd[1681]: time="2026-01-27T05:40:46.877200106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:40:46.877407 containerd[1681]: time="2026-01-27T05:40:46.877295151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:46.877596 kubelet[2895]: E0127 05:40:46.877555 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:40:46.877874 kubelet[2895]: E0127 05:40:46.877603 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:40:46.877874 kubelet[2895]: E0127 05:40:46.877705 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57b1faf7f7fc40c5b00dfb0c507a2180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:46.879833 containerd[1681]: time="2026-01-27T05:40:46.879804677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:40:47.234931 containerd[1681]: time="2026-01-27T05:40:47.234858054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:47.237571 containerd[1681]: time="2026-01-27T05:40:47.237535404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:40:47.237642 containerd[1681]: time="2026-01-27T05:40:47.237612838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:47.238208 kubelet[2895]: E0127 05:40:47.238179 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:40:47.238362 kubelet[2895]: E0127 05:40:47.238306 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:40:47.238509 kubelet[2895]: E0127 05:40:47.238482 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:47.239704 kubelet[2895]: E0127 05:40:47.239671 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:40:47.506272 containerd[1681]: time="2026-01-27T05:40:47.506178359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:40:47.847589 containerd[1681]: time="2026-01-27T05:40:47.847399559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:47.849095 containerd[1681]: time="2026-01-27T05:40:47.849058486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:40:47.849419 
containerd[1681]: time="2026-01-27T05:40:47.849328403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:47.849455 kubelet[2895]: E0127 05:40:47.849345 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:40:47.849455 kubelet[2895]: E0127 05:40:47.849400 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:40:47.849790 kubelet[2895]: E0127 05:40:47.849745 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:47.852244 containerd[1681]: time="2026-01-27T05:40:47.852053252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:40:48.192431 containerd[1681]: time="2026-01-27T05:40:48.192318837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:48.194007 containerd[1681]: time="2026-01-27T05:40:48.193974319Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:40:48.194092 containerd[1681]: time="2026-01-27T05:40:48.194060259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:48.194263 kubelet[2895]: E0127 05:40:48.194229 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:40:48.194507 kubelet[2895]: E0127 05:40:48.194278 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:40:48.194507 kubelet[2895]: E0127 05:40:48.194414 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:48.195704 kubelet[2895]: E0127 05:40:48.195670 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:40:49.507358 containerd[1681]: time="2026-01-27T05:40:49.507080418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:40:49.842096 containerd[1681]: time="2026-01-27T05:40:49.841968896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:49.843627 containerd[1681]: time="2026-01-27T05:40:49.843589600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:40:49.843721 containerd[1681]: time="2026-01-27T05:40:49.843666001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:49.843810 kubelet[2895]: E0127 05:40:49.843783 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:49.844094 kubelet[2895]: E0127 05:40:49.843825 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:49.844094 kubelet[2895]: E0127 05:40:49.843931 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz57s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-9vzkl_calico-apiserver(7997e895-ab0d-47da-83eb-264fa47d7c87): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:49.845122 kubelet[2895]: E0127 05:40:49.845088 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:40:50.505403 containerd[1681]: time="2026-01-27T05:40:50.505364615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:40:50.826791 containerd[1681]: time="2026-01-27T05:40:50.826560655Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:50.828132 containerd[1681]: time="2026-01-27T05:40:50.828094829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:40:50.828295 containerd[1681]: time="2026-01-27T05:40:50.828129875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
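[Editor's note] The &Container{...} dumps that kuberuntime_manager.go emits above are the Go String() form of the Kubernetes API object and are hard to scan in-line. As a readability aid only, the sketch below re-expresses the calico-apiserver container from the dump just above as a k8s.io/api/core/v1 literal; the field values are copied from the log, while the ptr helper and the empty main are scaffolding so the file compiles, not anything the log shows.

```go
// calico_apiserver_container.go: reconstruction of the logged container spec.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// ptr is a small helper for the pointer-valued fields in the dump (*false, *10001, ...).
func ptr[T any](v T) *T { return &v }

// calicoAPIServer mirrors the &Container{...} dump logged above for
// calico-apiserver-6d9df44df9-9vzkl.
var calicoAPIServer = corev1.Container{
	Name:  "calico-apiserver",
	Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
	Args: []string{
		"--secure-port=5443",
		"--tls-private-key-file=/calico-apiserver-certs/tls.key",
		"--tls-cert-file=/calico-apiserver-certs/tls.crt",
	},
	Env: []corev1.EnvVar{
		{Name: "DATASTORE_TYPE", Value: "kubernetes"},
		{Name: "KUBERNETES_SERVICE_HOST", Value: "10.96.0.1"},
		{Name: "KUBERNETES_SERVICE_PORT", Value: "443"},
		{Name: "LOG_LEVEL", Value: "info"},
		{Name: "MULTI_INTERFACE_MODE", Value: "none"},
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "calico-apiserver-certs", ReadOnly: true, MountPath: "/calico-apiserver-certs"},
		{Name: "kube-api-access-kz57s", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(5443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   5,
		PeriodSeconds:    60,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	},
	ImagePullPolicy: corev1.PullIfNotPresent,
	SecurityContext: &corev1.SecurityContext{
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		Privileged:               ptr(false),
		RunAsUser:                ptr(int64(10001)),
		RunAsGroup:               ptr(int64(10001)),
		RunAsNonRoot:             ptr(true),
		AllowPrivilegeEscalation: ptr(false),
		SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
	},
}

func main() {} // the variable above exists only for inspection
```

The same translation applies to the csi, node-driver-registrar, kube-controllers, goldmane and whisker dumps elsewhere in this log; only the names, args, env values and mounts differ.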
Jan 27 05:40:50.828467 kubelet[2895]: E0127 05:40:50.828434 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:40:50.828519 kubelet[2895]: E0127 05:40:50.828481 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:40:50.829076 kubelet[2895]: E0127 05:40:50.828696 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68f86b6c77-cbrw7_calico-system(cf580b0a-7ab1-4b43-ad9f-7219ad766e09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:50.830400 kubelet[2895]: E0127 05:40:50.830358 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:40:53.507635 containerd[1681]: time="2026-01-27T05:40:53.506607991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:40:53.833666 containerd[1681]: time="2026-01-27T05:40:53.833438508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:40:53.835287 containerd[1681]: time="2026-01-27T05:40:53.835175064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:40:53.835287 containerd[1681]: time="2026-01-27T05:40:53.835262668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:53.835533 kubelet[2895]: E0127 05:40:53.835476 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:40:53.836071 kubelet[2895]: E0127 05:40:53.835622 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:40:53.836112 containerd[1681]: time="2026-01-27T05:40:53.836059086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:40:53.836664 kubelet[2895]: E0127 05:40:53.836518 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2fvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kbs8f_calico-system(22258eaf-cd76-4bd2-ad47-8f4a85b664bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:53.837998 kubelet[2895]: E0127 05:40:53.837951 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:40:54.161984 containerd[1681]: time="2026-01-27T05:40:54.161532558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 
27 05:40:54.163386 containerd[1681]: time="2026-01-27T05:40:54.163337550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:40:54.163471 containerd[1681]: time="2026-01-27T05:40:54.163419222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:54.163804 kubelet[2895]: E0127 05:40:54.163766 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:54.163865 kubelet[2895]: E0127 05:40:54.163839 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:40:54.164115 kubelet[2895]: E0127 05:40:54.164077 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ptxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-7r4l6_calico-apiserver(2a3676d6-dbc6-4326-8dba-e3375f935a86): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:40:54.165961 kubelet[2895]: E0127 05:40:54.165934 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:41:01.512298 kubelet[2895]: E0127 05:41:01.511485 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:41:01.512739 kubelet[2895]: E0127 05:41:01.512413 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:41:03.514961 kubelet[2895]: E0127 05:41:03.514880 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:41:05.506147 kubelet[2895]: E0127 05:41:05.505873 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:41:05.506147 kubelet[2895]: E0127 05:41:05.505941 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:41:07.506613 kubelet[2895]: E0127 05:41:07.506511 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:41:12.506370 kubelet[2895]: E0127 05:41:12.506263 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:41:14.505185 kubelet[2895]: E0127 05:41:14.504591 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:41:16.505524 kubelet[2895]: E0127 05:41:16.505327 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:41:16.508882 
kubelet[2895]: E0127 05:41:16.506629 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:41:20.506050 kubelet[2895]: E0127 05:41:20.505594 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:41:20.506050 kubelet[2895]: E0127 05:41:20.505625 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:41:24.505463 kubelet[2895]: E0127 05:41:24.505416 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:41:27.505953 kubelet[2895]: E0127 05:41:27.505361 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:41:28.505328 kubelet[2895]: E0127 05:41:28.505293 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:41:28.506653 containerd[1681]: time="2026-01-27T05:41:28.506232282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:41:28.838071 containerd[1681]: time="2026-01-27T05:41:28.837940490Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:28.839767 containerd[1681]: time="2026-01-27T05:41:28.839709429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:41:28.839767 containerd[1681]: time="2026-01-27T05:41:28.839736060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:28.840048 kubelet[2895]: E0127 05:41:28.840008 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:28.840438 kubelet[2895]: E0127 05:41:28.840125 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:28.840438 kubelet[2895]: E0127 05:41:28.840241 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:28.842739 containerd[1681]: time="2026-01-27T05:41:28.842717654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:41:29.172419 containerd[1681]: time="2026-01-27T05:41:29.172319747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:29.173948 containerd[1681]: time="2026-01-27T05:41:29.173909840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:41:29.174089 containerd[1681]: time="2026-01-27T05:41:29.173998378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:29.174352 kubelet[2895]: E0127 05:41:29.174282 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:29.174352 kubelet[2895]: E0127 05:41:29.174326 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:29.174748 kubelet[2895]: E0127 05:41:29.174715 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:29.175951 kubelet[2895]: E0127 05:41:29.175920 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:41:33.506159 containerd[1681]: time="2026-01-27T05:41:33.506059313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:41:33.841636 containerd[1681]: time="2026-01-27T05:41:33.841490777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
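[Editor's note] Every retry above ends the same way: containerd reports `fetch failed after status: 404 Not Found` from host=ghcr.io, and kubelet then records ErrImagePull. A minimal sketch for confirming such a 404 from outside the node, assuming the standard OCI distribution manifest endpoint and ghcr.io's anonymous pull-token flow; the repository and tag are taken from the failing pulls above, but the probing flow itself is an assumption, not something the log demonstrates.

```go
// checktag.go: probe whether a tag resolves on ghcr.io (assumed anonymous token flow).
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4" // one of the failing references above

	// 1. Request an anonymous pull token for this repository (assumed ghcr.io token endpoint).
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		fmt.Fprintln(os.Stderr, "token request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		fmt.Fprintln(os.Stderr, "decoding token:", err)
		os.Exit(1)
	}

	// 2. HEAD the manifest: 200 means the tag resolves, 404 matches the containerd errors.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

	mresp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "manifest request failed:", err)
		os.Exit(1)
	}
	defer mresp.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, mresp.StatusCode)
}
```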
Jan 27 05:41:33.847156 containerd[1681]: time="2026-01-27T05:41:33.846582434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:41:33.847156 containerd[1681]: time="2026-01-27T05:41:33.846663520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:33.849453 kubelet[2895]: E0127 05:41:33.849378 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:33.850574 kubelet[2895]: E0127 05:41:33.849890 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:33.851558 kubelet[2895]: E0127 05:41:33.851495 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68f86b6c77-cbrw7_calico-system(cf580b0a-7ab1-4b43-ad9f-7219ad766e09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:33.852939 kubelet[2895]: E0127 05:41:33.852875 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:41:35.505881 containerd[1681]: time="2026-01-27T05:41:35.505689213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:41:35.842913 containerd[1681]: time="2026-01-27T05:41:35.842531357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:35.845052 containerd[1681]: time="2026-01-27T05:41:35.844965997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:41:35.845199 containerd[1681]: time="2026-01-27T05:41:35.845160405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:35.845418 kubelet[2895]: E0127 05:41:35.845372 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:41:35.845774 kubelet[2895]: E0127 05:41:35.845440 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:41:35.845774 kubelet[2895]: E0127 05:41:35.845618 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2fvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kbs8f_calico-system(22258eaf-cd76-4bd2-ad47-8f4a85b664bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:35.846862 kubelet[2895]: E0127 05:41:35.846818 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:41:38.505466 containerd[1681]: time="2026-01-27T05:41:38.505301365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 
27 05:41:38.820788 containerd[1681]: time="2026-01-27T05:41:38.820659175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:38.822753 containerd[1681]: time="2026-01-27T05:41:38.822705152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:41:38.822853 containerd[1681]: time="2026-01-27T05:41:38.822789446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:38.823313 kubelet[2895]: E0127 05:41:38.823117 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:38.824750 kubelet[2895]: E0127 05:41:38.824359 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:38.824750 kubelet[2895]: E0127 05:41:38.824486 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57b1faf7f7fc40c5b00dfb0c507a2180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:38.826518 containerd[1681]: time="2026-01-27T05:41:38.826478356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:41:39.168536 containerd[1681]: time="2026-01-27T05:41:39.168094751Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 27 05:41:39.173506 containerd[1681]: time="2026-01-27T05:41:39.173416919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:41:39.173506 containerd[1681]: time="2026-01-27T05:41:39.173464084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:39.173791 kubelet[2895]: E0127 05:41:39.173759 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:41:39.173930 kubelet[2895]: E0127 05:41:39.173888 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:41:39.174495 kubelet[2895]: E0127 05:41:39.174452 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:39.175744 kubelet[2895]: E0127 05:41:39.175708 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:41:39.239981 systemd[1]: Started sshd@7-10.0.7.41:22-4.153.228.146:46768.service - OpenSSH per-connection server daemon (4.153.228.146:46768). Jan 27 05:41:39.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.7.41:22-4.153.228.146:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:39.241340 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 27 05:41:39.241374 kernel: audit: type=1130 audit(1769492499.239:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.7.41:22-4.153.228.146:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:39.508432 containerd[1681]: time="2026-01-27T05:41:39.507984803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:39.858000 audit[5041]: USER_ACCT pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:39.862787 sshd[5041]: Accepted publickey for core from 4.153.228.146 port 46768 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:41:39.863559 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:41:39.864953 containerd[1681]: time="2026-01-27T05:41:39.864918021Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:39.865098 kernel: audit: type=1101 audit(1769492499.858:754): pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:39.860000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:39.870119 kernel: audit: type=1103 audit(1769492499.860:755): pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:39.860000 audit[5041]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=8 a1=7fff8d614110 a2=3 a3=0 items=0 ppid=1 pid=5041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:39.874736 containerd[1681]: time="2026-01-27T05:41:39.874524464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:39.874736 containerd[1681]: time="2026-01-27T05:41:39.874611274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:39.874920 kernel: audit: type=1006 audit(1769492499.860:756): pid=5041 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 27 05:41:39.874956 kernel: audit: type=1300 audit(1769492499.860:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8d614110 a2=3 a3=0 items=0 ppid=1 pid=5041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:39.877883 kubelet[2895]: E0127 05:41:39.877846 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:39.879388 kernel: audit: type=1327 audit(1769492499.860:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:39.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:39.879555 kubelet[2895]: E0127 05:41:39.877896 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:39.879555 kubelet[2895]: E0127 05:41:39.878009 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz57s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-9vzkl_calico-apiserver(7997e895-ab0d-47da-83eb-264fa47d7c87): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:39.882120 kubelet[2895]: E0127 05:41:39.881276 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:41:39.882601 systemd-logind[1655]: New session 9 of user core. Jan 27 05:41:39.889677 systemd[1]: Started session-9.scope - Session 9 of User core. 
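Editor's note: every pull failure above follows the same shape: containerd asks ghcr.io for the tag, the registry answers 404, and the reference never resolves ("failed to resolve image: ... not found"). One way to confirm that outside of kubelet/containerd is to ask the registry's manifest endpoint directly. The sketch below is only an illustration under stated assumptions (that ghcr.io issues anonymous pull tokens for public repositories via its /token endpoint and serves manifests at the standard Docker Registry v2 path /v2/<name>/manifests/<tag>); it is not how containerd itself performs resolution.

# Minimal sketch: check whether a tag resolves on ghcr.io.
# Assumptions (not taken from this log): anonymous token flow at /token and
# standard Registry v2 manifest paths for public images.
import json
import urllib.error
import urllib.request

def manifest_status(name: str, tag: str) -> int:
    token_url = f"https://ghcr.io/token?scope=repository:{name}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{name}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # a 404 here matches the "not found" errors in this log

if __name__ == "__main__":
    print(manifest_status("flatcar/calico/apiserver", "v3.30.4"))

A 404 from this check would mean the tag simply is not published under that name, which is consistent with every retry in this log ending in NotFound rather than an authentication or network error.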
Jan 27 05:41:39.893000 audit[5041]: USER_START pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:39.900094 kernel: audit: type=1105 audit(1769492499.893:757): pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:39.899000 audit[5045]: CRED_ACQ pid=5045 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:39.905191 kernel: audit: type=1103 audit(1769492499.899:758): pid=5045 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:40.240578 sshd[5045]: Connection closed by 4.153.228.146 port 46768 Jan 27 05:41:40.240894 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Jan 27 05:41:40.241000 audit[5041]: USER_END pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:40.245197 systemd-logind[1655]: Session 9 logged out. Waiting for processes to exit. Jan 27 05:41:40.246360 systemd[1]: sshd@7-10.0.7.41:22-4.153.228.146:46768.service: Deactivated successfully. Jan 27 05:41:40.248663 systemd[1]: session-9.scope: Deactivated successfully. Jan 27 05:41:40.249059 kernel: audit: type=1106 audit(1769492500.241:759): pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:40.241000 audit[5041]: CRED_DISP pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:40.251028 systemd-logind[1655]: Removed session 9. Jan 27 05:41:40.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.7.41:22-4.153.228.146:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:41:40.253098 kernel: audit: type=1104 audit(1769492500.241:760): pid=5041 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:41.504936 containerd[1681]: time="2026-01-27T05:41:41.504882463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:41.832323 containerd[1681]: time="2026-01-27T05:41:41.831981071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:41.833639 containerd[1681]: time="2026-01-27T05:41:41.833613668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:41.833727 containerd[1681]: time="2026-01-27T05:41:41.833679520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:41.834086 kubelet[2895]: E0127 05:41:41.833855 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:41.834086 kubelet[2895]: E0127 05:41:41.833902 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:41.834086 kubelet[2895]: E0127 05:41:41.834029 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ptxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-7r4l6_calico-apiserver(2a3676d6-dbc6-4326-8dba-e3375f935a86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:41.835791 kubelet[2895]: E0127 05:41:41.835753 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:41:43.509794 kubelet[2895]: E0127 05:41:43.509749 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:41:45.352436 systemd[1]: Started sshd@8-10.0.7.41:22-4.153.228.146:43962.service - OpenSSH per-connection server daemon (4.153.228.146:43962). Jan 27 05:41:45.356062 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:41:45.356137 kernel: audit: type=1130 audit(1769492505.351:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.7.41:22-4.153.228.146:43962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:45.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.7.41:22-4.153.228.146:43962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:41:45.505612 kubelet[2895]: E0127 05:41:45.505359 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:41:45.894000 audit[5064]: USER_ACCT pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:45.899453 sshd[5064]: Accepted publickey for core from 4.153.228.146 port 43962 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:41:45.900344 kernel: audit: type=1101 audit(1769492505.894:763): pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:45.899000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:45.904054 kernel: audit: type=1103 audit(1769492505.899:764): pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:45.904855 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:41:45.908065 kernel: audit: type=1006 audit(1769492505.903:765): pid=5064 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 27 05:41:45.903000 audit[5064]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc97dcce10 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:45.903000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:45.913063 kernel: audit: type=1300 audit(1769492505.903:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc97dcce10 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:45.913124 kernel: audit: type=1327 audit(1769492505.903:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:45.922099 systemd-logind[1655]: New session 10 of user core. Jan 27 05:41:45.931254 systemd[1]: Started session-10.scope - Session 10 of User core. 
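Editor's note: once a pull fails with ErrImagePull, kubelet keeps the pod in ImagePullBackOff (as in the calico-kube-controllers entry above) and retries on an exponential schedule, which is why the same "Back-off pulling image" errors recur every few seconds to minutes throughout this log. The sketch below just prints such a schedule; the 10 s initial delay and 300 s cap are commonly cited kubelet defaults, stated here as an assumption rather than read from this log.

# Sketch of an exponential image-pull backoff schedule.
# Assumption (not from this log): kubelet-style defaults of a 10 s initial
# delay doubling up to a 300 s cap.
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)
        delay = min(delay * 2, cap)

for attempt, delay in backoff_schedule():
    print(f"retry {attempt}: wait {delay:.0f}s")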
Jan 27 05:41:45.934000 audit[5064]: USER_START pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:45.942444 kernel: audit: type=1105 audit(1769492505.934:766): pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:45.941000 audit[5068]: CRED_ACQ pid=5068 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:45.951050 kernel: audit: type=1103 audit(1769492505.941:767): pid=5068 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:46.277147 sshd[5068]: Connection closed by 4.153.228.146 port 43962 Jan 27 05:41:46.277486 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Jan 27 05:41:46.278000 audit[5064]: USER_END pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:46.282524 systemd[1]: sshd@8-10.0.7.41:22-4.153.228.146:43962.service: Deactivated successfully. Jan 27 05:41:46.278000 audit[5064]: CRED_DISP pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:46.286121 systemd[1]: session-10.scope: Deactivated successfully. Jan 27 05:41:46.286517 kernel: audit: type=1106 audit(1769492506.278:768): pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:46.286567 kernel: audit: type=1104 audit(1769492506.278:769): pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:46.288173 systemd-logind[1655]: Session 10 logged out. Waiting for processes to exit. Jan 27 05:41:46.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.7.41:22-4.153.228.146:43962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:46.291612 systemd-logind[1655]: Removed session 10. 
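Editor's note: the audit records above report PROCTITLE values as hex-encoded, NUL-separated argv strings. They decode to ordinary command lines; the snippet below shows the decoding, and both example values are copied verbatim from PROCTITLE fields in this log (the second appears in the iptables-restore records further down).

# Decode audit PROCTITLE values: hex-encoded argv with NUL separators.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode(errors="replace")

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters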
Jan 27 05:41:48.506060 kubelet[2895]: E0127 05:41:48.505582 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:41:51.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.7.41:22-4.153.228.146:43966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:51.384704 systemd[1]: Started sshd@9-10.0.7.41:22-4.153.228.146:43966.service - OpenSSH per-connection server daemon (4.153.228.146:43966). Jan 27 05:41:51.386000 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:41:51.386072 kernel: audit: type=1130 audit(1769492511.383:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.7.41:22-4.153.228.146:43966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:51.506241 kubelet[2895]: E0127 05:41:51.506189 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:41:51.910000 audit[5081]: USER_ACCT pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:51.913003 sshd[5081]: Accepted publickey for core from 4.153.228.146 port 43966 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:41:51.915730 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:41:51.914000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:51.920989 kernel: audit: type=1101 audit(1769492511.910:772): pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:51.921049 kernel: audit: 
type=1103 audit(1769492511.914:773): pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:51.926245 kernel: audit: type=1006 audit(1769492511.914:774): pid=5081 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 27 05:41:51.926129 systemd-logind[1655]: New session 11 of user core. Jan 27 05:41:51.914000 audit[5081]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2517f4f0 a2=3 a3=0 items=0 ppid=1 pid=5081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:51.930722 kernel: audit: type=1300 audit(1769492511.914:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2517f4f0 a2=3 a3=0 items=0 ppid=1 pid=5081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:51.914000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:51.935008 kernel: audit: type=1327 audit(1769492511.914:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:51.935332 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 27 05:41:51.938000 audit[5081]: USER_START pid=5081 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:51.940000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:51.946467 kernel: audit: type=1105 audit(1769492511.938:775): pid=5081 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:51.946505 kernel: audit: type=1103 audit(1769492511.940:776): pid=5085 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.290992 sshd[5085]: Connection closed by 4.153.228.146 port 43966 Jan 27 05:41:52.293186 sshd-session[5081]: pam_unix(sshd:session): session closed for user core Jan 27 05:41:52.293000 audit[5081]: USER_END pid=5081 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.300053 kernel: audit: type=1106 
audit(1769492512.293:777): pid=5081 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.303221 systemd[1]: sshd@9-10.0.7.41:22-4.153.228.146:43966.service: Deactivated successfully. Jan 27 05:41:52.306554 systemd[1]: session-11.scope: Deactivated successfully. Jan 27 05:41:52.293000 audit[5081]: CRED_DISP pid=5081 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.311813 kernel: audit: type=1104 audit(1769492512.293:778): pid=5081 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.310823 systemd-logind[1655]: Session 11 logged out. Waiting for processes to exit. Jan 27 05:41:52.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.7.41:22-4.153.228.146:43966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:52.313156 systemd-logind[1655]: Removed session 11. Jan 27 05:41:52.404301 systemd[1]: Started sshd@10-10.0.7.41:22-4.153.228.146:43968.service - OpenSSH per-connection server daemon (4.153.228.146:43968). Jan 27 05:41:52.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.7.41:22-4.153.228.146:43968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:41:52.506356 kubelet[2895]: E0127 05:41:52.505558 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:41:52.943000 audit[5099]: USER_ACCT pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.944573 sshd[5099]: Accepted publickey for core from 4.153.228.146 port 43968 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:41:52.944000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.944000 audit[5099]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce4935ed0 a2=3 a3=0 items=0 ppid=1 pid=5099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:52.944000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:52.945937 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:41:52.950112 systemd-logind[1655]: New session 12 of user core. Jan 27 05:41:52.958326 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 27 05:41:52.961000 audit[5099]: USER_START pid=5099 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:52.963000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:53.337756 sshd[5103]: Connection closed by 4.153.228.146 port 43968 Jan 27 05:41:53.336000 audit[5099]: USER_END pid=5099 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:53.336678 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Jan 27 05:41:53.337000 audit[5099]: CRED_DISP pid=5099 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:53.340045 systemd[1]: sshd@10-10.0.7.41:22-4.153.228.146:43968.service: Deactivated successfully. Jan 27 05:41:53.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.7.41:22-4.153.228.146:43968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:53.342698 systemd[1]: session-12.scope: Deactivated successfully. Jan 27 05:41:53.343704 systemd-logind[1655]: Session 12 logged out. Waiting for processes to exit. Jan 27 05:41:53.345686 systemd-logind[1655]: Removed session 12. Jan 27 05:41:53.449322 systemd[1]: Started sshd@11-10.0.7.41:22-4.153.228.146:43974.service - OpenSSH per-connection server daemon (4.153.228.146:43974). Jan 27 05:41:53.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.7.41:22-4.153.228.146:43974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:41:53.989000 audit[5113]: USER_ACCT pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:53.991585 sshd[5113]: Accepted publickey for core from 4.153.228.146 port 43974 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:41:53.992000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:53.992000 audit[5113]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf6ccc3d0 a2=3 a3=0 items=0 ppid=1 pid=5113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:53.992000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:41:53.994004 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:41:54.000134 systemd-logind[1655]: New session 13 of user core. Jan 27 05:41:54.004266 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 27 05:41:54.006000 audit[5113]: USER_START pid=5113 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:54.008000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:54.387422 sshd[5120]: Connection closed by 4.153.228.146 port 43974 Jan 27 05:41:54.388116 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Jan 27 05:41:54.389000 audit[5113]: USER_END pid=5113 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:54.389000 audit[5113]: CRED_DISP pid=5113 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:41:54.394176 systemd-logind[1655]: Session 13 logged out. Waiting for processes to exit. Jan 27 05:41:54.394470 systemd[1]: sshd@11-10.0.7.41:22-4.153.228.146:43974.service: Deactivated successfully. Jan 27 05:41:54.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.7.41:22-4.153.228.146:43974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:54.397226 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 27 05:41:54.400460 systemd-logind[1655]: Removed session 13. Jan 27 05:41:55.130995 update_engine[1659]: I20260127 05:41:55.130911 1659 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 27 05:41:55.130995 update_engine[1659]: I20260127 05:41:55.130970 1659 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 27 05:41:55.131389 update_engine[1659]: I20260127 05:41:55.131224 1659 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 27 05:41:55.131931 update_engine[1659]: I20260127 05:41:55.131645 1659 omaha_request_params.cc:62] Current group set to developer Jan 27 05:41:55.131931 update_engine[1659]: I20260127 05:41:55.131758 1659 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 27 05:41:55.131931 update_engine[1659]: I20260127 05:41:55.131767 1659 update_attempter.cc:643] Scheduling an action processor start. Jan 27 05:41:55.131931 update_engine[1659]: I20260127 05:41:55.131785 1659 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 27 05:41:55.139488 locksmithd[1703]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 27 05:41:55.140359 update_engine[1659]: I20260127 05:41:55.140285 1659 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 27 05:41:55.140404 update_engine[1659]: I20260127 05:41:55.140381 1659 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 27 05:41:55.140404 update_engine[1659]: I20260127 05:41:55.140389 1659 omaha_request_action.cc:272] Request: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: Jan 27 05:41:55.140404 update_engine[1659]: I20260127 05:41:55.140396 1659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:41:55.143839 update_engine[1659]: I20260127 05:41:55.143796 1659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:41:55.144330 update_engine[1659]: I20260127 05:41:55.144296 1659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
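Editor's note: the update_engine entries above show it loading its prefs, settling on group "developer", and then posting its Omaha request to the literal host "disabled", which the retry entries that follow report as an unresolvable name. That pattern is what results when the Omaha server in the machine's update configuration is the string "disabled" rather than a URL. The sketch below only illustrates reading that style of KEY=VALUE configuration; the /etc/flatcar/update.conf path and the SERVER/GROUP keys are assumptions based on Flatcar convention, not something printed in this log.

# Minimal sketch, assuming Flatcar-style /etc/flatcar/update.conf with simple
# KEY=VALUE lines; SERVER=disabled would explain update_engine treating
# "disabled" as a hostname and failing DNS, as in the retry loop in this log.
from pathlib import Path

def read_update_conf(path: str = "/etc/flatcar/update.conf") -> dict:
    conf = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip().strip('"')
    return conf

if __name__ == "__main__":
    conf = read_update_conf()
    print("GROUP  =", conf.get("GROUP", "<unset>"))
    print("SERVER =", conf.get("SERVER", "<default public Omaha endpoint>"))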
Jan 27 05:41:55.152196 update_engine[1659]: E20260127 05:41:55.152168 1659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:41:55.152271 update_engine[1659]: I20260127 05:41:55.152235 1659 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 27 05:41:55.506164 kubelet[2895]: E0127 05:41:55.506129 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:41:56.506109 kubelet[2895]: E0127 05:41:56.506050 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:41:59.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.7.41:22-4.153.228.146:46028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:41:59.496729 systemd[1]: Started sshd@12-10.0.7.41:22-4.153.228.146:46028.service - OpenSSH per-connection server daemon (4.153.228.146:46028). Jan 27 05:41:59.497782 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 27 05:41:59.497874 kernel: audit: type=1130 audit(1769492519.495:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.7.41:22-4.153.228.146:46028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:41:59.509824 kubelet[2895]: E0127 05:41:59.509173 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:42:00.040000 audit[5134]: USER_ACCT pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.047053 kernel: audit: type=1101 audit(1769492520.040:799): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.047608 sshd[5134]: Accepted publickey for core from 4.153.228.146 port 46028 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:00.048662 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:00.046000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.056058 kernel: audit: type=1103 audit(1769492520.046:800): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.064091 kernel: audit: type=1006 audit(1769492520.046:801): pid=5134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 27 05:42:00.064170 kernel: audit: type=1300 audit(1769492520.046:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd10c75760 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:00.046000 audit[5134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd10c75760 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:00.063706 systemd-logind[1655]: New session 14 of user core. Jan 27 05:42:00.069629 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 27 05:42:00.046000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:00.074052 kernel: audit: type=1327 audit(1769492520.046:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:00.074000 audit[5134]: USER_START pid=5134 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.080056 kernel: audit: type=1105 audit(1769492520.074:802): pid=5134 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.081000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.086105 kernel: audit: type=1103 audit(1769492520.081:803): pid=5138 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.390137 kernel: audit: type=1130 audit(1769492520.385:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.7.41:22-114.111.54.188:57222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:00.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.7.41:22-114.111.54.188:57222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:00.386283 systemd[1]: Started sshd@13-10.0.7.41:22-114.111.54.188:57222.service - OpenSSH per-connection server daemon (114.111.54.188:57222). Jan 27 05:42:00.395603 sshd[5138]: Connection closed by 4.153.228.146 port 46028 Jan 27 05:42:00.394314 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:00.402502 kernel: audit: type=1106 audit(1769492520.395:805): pid=5134 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.395000 audit[5134]: USER_END pid=5134 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.402183 systemd[1]: sshd@12-10.0.7.41:22-4.153.228.146:46028.service: Deactivated successfully. 
Jan 27 05:42:00.395000 audit[5134]: CRED_DISP pid=5134 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:00.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.7.41:22-4.153.228.146:46028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:00.404583 systemd[1]: session-14.scope: Deactivated successfully. Jan 27 05:42:00.410510 systemd-logind[1655]: Session 14 logged out. Waiting for processes to exit. Jan 27 05:42:00.412205 systemd-logind[1655]: Removed session 14. Jan 27 05:42:00.506219 systemd[1]: Started sshd@14-10.0.7.41:22-4.153.228.146:46030.service - OpenSSH per-connection server daemon (4.153.228.146:46030). Jan 27 05:42:00.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.7.41:22-4.153.228.146:46030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:01.051000 audit[5154]: USER_ACCT pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:01.053061 sshd[5154]: Accepted publickey for core from 4.153.228.146 port 46030 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:01.052000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:01.052000 audit[5154]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd9012450 a2=3 a3=0 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:01.052000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:01.054338 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:01.059890 systemd-logind[1655]: New session 15 of user core. Jan 27 05:42:01.064227 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 27 05:42:01.067000 audit[5154]: USER_START pid=5154 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:01.068000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:01.215152 sshd[5147]: Invalid user admin from 114.111.54.188 port 57222 Jan 27 05:42:01.475000 audit[5147]: USER_ERR pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=114.111.54.188 addr=114.111.54.188 terminal=ssh res=failed' Jan 27 05:42:01.476575 sshd[5147]: Connection closed by invalid user admin 114.111.54.188 port 57222 [preauth] Jan 27 05:42:01.478823 systemd[1]: sshd@13-10.0.7.41:22-114.111.54.188:57222.service: Deactivated successfully. Jan 27 05:42:01.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.7.41:22-114.111.54.188:57222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:01.514908 kubelet[2895]: E0127 05:42:01.514805 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:42:01.708043 sshd[5160]: Connection closed by 4.153.228.146 port 46030 Jan 27 05:42:01.709257 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:01.711000 audit[5154]: USER_END pid=5154 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:01.711000 audit[5154]: CRED_DISP pid=5154 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:01.715587 systemd[1]: sshd@14-10.0.7.41:22-4.153.228.146:46030.service: Deactivated successfully. Jan 27 05:42:01.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.7.41:22-4.153.228.146:46030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:01.719014 systemd[1]: session-15.scope: Deactivated successfully. Jan 27 05:42:01.720412 systemd-logind[1655]: Session 15 logged out. Waiting for processes to exit. Jan 27 05:42:01.723522 systemd-logind[1655]: Removed session 15. 
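Editor's note: alongside the legitimate "core" sessions from 4.153.228.146, the entries above record a failed pre-auth probe for user "admin" from 114.111.54.188 (the sshd "Invalid user" line and the matching audit USER_ERR record). A quick triage step is to tally such attempts by source address. The sketch below is a generic helper over journal text piped to stdin, not a tool referenced by this log; the journalctl invocation and script name are illustrative.

# Tally "Invalid user" pre-auth attempts by source IP from sshd log text on stdin.
# Example usage (assumed, not from this log):
#   journalctl --no-pager | python3 tally_invalid_users.py
import re
import sys
from collections import Counter

PATTERN = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")

counts = Counter()
for line in sys.stdin:
    match = PATTERN.search(line)
    if match:
        user, addr, _port = match.groups()
        counts[(addr, user)] += 1

for (addr, user), n in counts.most_common():
    print(f"{addr}\t{user}\t{n}")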
Jan 27 05:42:01.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.7.41:22-4.153.228.146:46040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:01.813100 systemd[1]: Started sshd@15-10.0.7.41:22-4.153.228.146:46040.service - OpenSSH per-connection server daemon (4.153.228.146:46040). Jan 27 05:42:02.335000 audit[5172]: USER_ACCT pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:02.337303 sshd[5172]: Accepted publickey for core from 4.153.228.146 port 46040 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:02.336000 audit[5172]: CRED_ACQ pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:02.337000 audit[5172]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcca94aba0 a2=3 a3=0 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:02.337000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:02.339057 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:02.343067 systemd-logind[1655]: New session 16 of user core. Jan 27 05:42:02.351202 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 27 05:42:02.353000 audit[5172]: USER_START pid=5172 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:02.354000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:03.124000 audit[5185]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:42:03.124000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffef8e4b8f0 a2=0 a3=7ffef8e4b8dc items=0 ppid=2997 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:03.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:42:03.129000 audit[5185]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:42:03.129000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffef8e4b8f0 a2=0 a3=0 items=0 ppid=2997 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:03.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:42:03.143000 audit[5187]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:42:03.143000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff6a8a6330 a2=0 a3=7fff6a8a631c items=0 ppid=2997 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:03.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:42:03.148758 sshd[5176]: Connection closed by 4.153.228.146 port 46040 Jan 27 05:42:03.148247 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:03.148000 audit[5187]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:42:03.148000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff6a8a6330 a2=0 a3=0 items=0 ppid=2997 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:03.148000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:42:03.149000 audit[5172]: USER_END pid=5172 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:03.149000 audit[5172]: CRED_DISP pid=5172 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:03.152980 systemd[1]: sshd@15-10.0.7.41:22-4.153.228.146:46040.service: Deactivated successfully. Jan 27 05:42:03.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.7.41:22-4.153.228.146:46040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:03.156298 systemd[1]: session-16.scope: Deactivated successfully. Jan 27 05:42:03.158178 systemd-logind[1655]: Session 16 logged out. Waiting for processes to exit. Jan 27 05:42:03.161375 systemd-logind[1655]: Removed session 16. Jan 27 05:42:03.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.7.41:22-4.153.228.146:46050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:03.256171 systemd[1]: Started sshd@16-10.0.7.41:22-4.153.228.146:46050.service - OpenSSH per-connection server daemon (4.153.228.146:46050). 
Jan 27 05:42:03.506677 kubelet[2895]: E0127 05:42:03.505985 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:42:03.792990 sshd[5192]: Accepted publickey for core from 4.153.228.146 port 46050 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:03.791000 audit[5192]: USER_ACCT pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:03.792000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:03.792000 audit[5192]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff09746ba0 a2=3 a3=0 items=0 ppid=1 pid=5192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:03.792000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:03.794841 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:03.804483 systemd-logind[1655]: New session 17 of user core. Jan 27 05:42:03.810422 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 27 05:42:03.812000 audit[5192]: USER_START pid=5192 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:03.815000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.289247 sshd[5196]: Connection closed by 4.153.228.146 port 46050 Jan 27 05:42:04.289794 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:04.290000 audit[5192]: USER_END pid=5192 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.290000 audit[5192]: CRED_DISP pid=5192 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.293750 systemd[1]: sshd@16-10.0.7.41:22-4.153.228.146:46050.service: Deactivated successfully. Jan 27 05:42:04.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.7.41:22-4.153.228.146:46050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:04.295545 systemd[1]: session-17.scope: Deactivated successfully. Jan 27 05:42:04.296324 systemd-logind[1655]: Session 17 logged out. Waiting for processes to exit. Jan 27 05:42:04.297807 systemd-logind[1655]: Removed session 17. Jan 27 05:42:04.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.7.41:22-4.153.228.146:46052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:04.397482 systemd[1]: Started sshd@17-10.0.7.41:22-4.153.228.146:46052.service - OpenSSH per-connection server daemon (4.153.228.146:46052). 
Jan 27 05:42:04.963306 sshd[5206]: Accepted publickey for core from 4.153.228.146 port 46052 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:04.962000 audit[5206]: USER_ACCT pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.965288 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 27 05:42:04.965457 kernel: audit: type=1101 audit(1769492524.962:842): pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.966880 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:04.964000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.977657 systemd-logind[1655]: New session 18 of user core. Jan 27 05:42:04.978047 kernel: audit: type=1103 audit(1769492524.964:843): pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.984057 kernel: audit: type=1006 audit(1769492524.964:844): pid=5206 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 27 05:42:04.984405 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 27 05:42:04.964000 audit[5206]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9ed902b0 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:04.990047 kernel: audit: type=1300 audit(1769492524.964:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9ed902b0 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:04.964000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:04.997059 kernel: audit: type=1327 audit(1769492524.964:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:04.991000 audit[5206]: USER_START pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:05.003068 kernel: audit: type=1105 audit(1769492524.991:845): pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:04.994000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:05.009053 kernel: audit: type=1103 audit(1769492524.994:846): pid=5233 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:05.130642 update_engine[1659]: I20260127 05:42:05.130076 1659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:42:05.130642 update_engine[1659]: I20260127 05:42:05.130195 1659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:42:05.130642 update_engine[1659]: I20260127 05:42:05.130596 1659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 27 05:42:05.137804 update_engine[1659]: E20260127 05:42:05.137652 1659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:42:05.137804 update_engine[1659]: I20260127 05:42:05.137774 1659 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 27 05:42:05.357739 sshd[5233]: Connection closed by 4.153.228.146 port 46052 Jan 27 05:42:05.360273 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:05.369141 kernel: audit: type=1106 audit(1769492525.361:847): pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:05.361000 audit[5206]: USER_END pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:05.371685 systemd-logind[1655]: Session 18 logged out. Waiting for processes to exit. Jan 27 05:42:05.372097 systemd[1]: sshd@17-10.0.7.41:22-4.153.228.146:46052.service: Deactivated successfully. Jan 27 05:42:05.374810 systemd[1]: session-18.scope: Deactivated successfully. Jan 27 05:42:05.381488 kernel: audit: type=1104 audit(1769492525.366:848): pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:05.366000 audit[5206]: CRED_DISP pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:05.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.7.41:22-4.153.228.146:46052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:05.382179 systemd-logind[1655]: Removed session 18. Jan 27 05:42:05.386200 kernel: audit: type=1131 audit(1769492525.371:849): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.7.41:22-4.153.228.146:46052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:05.507537 kubelet[2895]: E0127 05:42:05.507477 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:42:07.505429 kubelet[2895]: E0127 05:42:07.505145 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:42:08.070000 audit[5246]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:42:08.070000 audit[5246]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe328d6f30 a2=0 a3=7ffe328d6f1c items=0 ppid=2997 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:08.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:42:08.078000 audit[5246]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:42:08.078000 audit[5246]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe328d6f30 a2=0 a3=7ffe328d6f1c items=0 ppid=2997 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:08.078000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:42:10.470014 systemd[1]: Started sshd@18-10.0.7.41:22-4.153.228.146:39176.service - OpenSSH per-connection server daemon (4.153.228.146:39176). Jan 27 05:42:10.472088 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 27 05:42:10.472648 kernel: audit: type=1130 audit(1769492530.469:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.7.41:22-4.153.228.146:39176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:10.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.7.41:22-4.153.228.146:39176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:10.505810 kubelet[2895]: E0127 05:42:10.505778 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:42:10.507251 kubelet[2895]: E0127 05:42:10.506441 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:42:11.035000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.038573 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:11.039553 sshd[5249]: Accepted publickey for core from 4.153.228.146 port 39176 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:11.036000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.045808 kernel: audit: type=1101 audit(1769492531.035:853): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.045897 kernel: audit: type=1103 audit(1769492531.036:854): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.051248 systemd-logind[1655]: New session 19 of user core. 
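The kubelet pod_workers.go entries repeating through this section all describe one condition: ImagePullBackOff because the Calico v3.30.4 images are not resolvable under ghcr.io/flatcar. A small sketch that condenses such entries into an image-to-pods map; the regexes reflect only the message shape visible in this excerpt:

import re
from collections import defaultdict

# image references and pod field as they appear in these kubelet messages
IMAGE = re.compile(r'ghcr\.io/[^\s"\\]+')
POD = re.compile(r'pod="([^"]+)"')

def summarize_image_pull_backoff(journal_text: str) -> dict:
    failing = defaultdict(set)
    for line in journal_text.splitlines():
        if "ImagePullBackOff" not in line:
            continue
        pod = POD.search(line)
        pod_name = pod.group(1) if pod else "?"
        # strip the trailing ':' picked up from "...:v3.30.4: not found"
        for image in {m.rstrip(":") for m in IMAGE.findall(line)}:
            failing[image].add(pod_name)
    return failing

Run over this excerpt it would list the goldmane, apiserver, whisker, whisker-backend, kube-controllers, csi and node-driver-registrar images, all tagged v3.30.4, mapped to their calico-system and calico-apiserver pods.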
Jan 27 05:42:11.051504 kernel: audit: type=1006 audit(1769492531.036:855): pid=5249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 27 05:42:11.036000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45a42900 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:11.056020 kernel: audit: type=1300 audit(1769492531.036:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45a42900 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:11.057409 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 27 05:42:11.036000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:11.060622 kernel: audit: type=1327 audit(1769492531.036:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:11.062000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.065000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.070675 kernel: audit: type=1105 audit(1769492531.062:856): pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.070738 kernel: audit: type=1103 audit(1769492531.065:857): pid=5253 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.422506 sshd[5253]: Connection closed by 4.153.228.146 port 39176 Jan 27 05:42:11.424512 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:11.424000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.428944 systemd[1]: sshd@18-10.0.7.41:22-4.153.228.146:39176.service: Deactivated successfully. 
Jan 27 05:42:11.431367 kernel: audit: type=1106 audit(1769492531.424:858): pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.432280 systemd[1]: session-19.scope: Deactivated successfully. Jan 27 05:42:11.424000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.437096 kernel: audit: type=1104 audit(1769492531.424:859): pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:11.434148 systemd-logind[1655]: Session 19 logged out. Waiting for processes to exit. Jan 27 05:42:11.438050 systemd-logind[1655]: Removed session 19. Jan 27 05:42:11.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.7.41:22-4.153.228.146:39176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:14.505985 kubelet[2895]: E0127 05:42:14.505330 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:42:15.133919 update_engine[1659]: I20260127 05:42:15.133819 1659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:42:15.134291 update_engine[1659]: I20260127 05:42:15.133939 1659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:42:15.134463 update_engine[1659]: I20260127 05:42:15.134421 1659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 27 05:42:15.140553 update_engine[1659]: E20260127 05:42:15.140470 1659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:42:15.140773 update_engine[1659]: I20260127 05:42:15.140745 1659 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 27 05:42:16.506166 kubelet[2895]: E0127 05:42:16.505473 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:42:16.507547 kubelet[2895]: E0127 05:42:16.507477 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:42:16.541335 systemd[1]: Started sshd@19-10.0.7.41:22-4.153.228.146:56496.service - OpenSSH per-connection server daemon (4.153.228.146:56496). Jan 27 05:42:16.547627 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:42:16.547711 kernel: audit: type=1130 audit(1769492536.540:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.7.41:22-4.153.228.146:56496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:16.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.7.41:22-4.153.228.146:56496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:17.088000 audit[5265]: USER_ACCT pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.094512 sshd[5265]: Accepted publickey for core from 4.153.228.146 port 56496 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:17.095400 kernel: audit: type=1101 audit(1769492537.088:862): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.094000 audit[5265]: CRED_ACQ pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.096892 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:17.102638 kernel: audit: type=1103 audit(1769492537.094:863): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.102722 kernel: audit: type=1006 audit(1769492537.095:864): pid=5265 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 27 05:42:17.105069 kernel: audit: type=1300 audit(1769492537.095:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffe7fbb40 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:17.095000 audit[5265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffe7fbb40 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:17.103224 systemd-logind[1655]: New session 20 of user core. Jan 27 05:42:17.095000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:17.108694 kernel: audit: type=1327 audit(1769492537.095:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:17.109247 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 27 05:42:17.113000 audit[5265]: USER_START pid=5265 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.116000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.120433 kernel: audit: type=1105 audit(1769492537.113:865): pid=5265 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.120481 kernel: audit: type=1103 audit(1769492537.116:866): pid=5269 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.465000 sshd[5269]: Connection closed by 4.153.228.146 port 56496 Jan 27 05:42:17.465966 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:17.466000 audit[5265]: USER_END pid=5265 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.471080 systemd-logind[1655]: Session 20 logged out. Waiting for processes to exit. Jan 27 05:42:17.472345 systemd[1]: sshd@19-10.0.7.41:22-4.153.228.146:56496.service: Deactivated successfully. Jan 27 05:42:17.474899 systemd[1]: session-20.scope: Deactivated successfully. Jan 27 05:42:17.476057 kernel: audit: type=1106 audit(1769492537.466:867): pid=5265 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.466000 audit[5265]: CRED_DISP pid=5265 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:17.478191 systemd-logind[1655]: Removed session 20. Jan 27 05:42:17.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.7.41:22-4.153.228.146:56496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:17.482046 kernel: audit: type=1104 audit(1769492537.466:868): pid=5265 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:20.505228 kubelet[2895]: E0127 05:42:20.505180 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:42:22.505286 kubelet[2895]: E0127 05:42:22.505234 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:42:22.579252 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:42:22.579365 kernel: audit: type=1130 audit(1769492542.576:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.7.41:22-4.153.228.146:56502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:22.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.7.41:22-4.153.228.146:56502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:22.577282 systemd[1]: Started sshd@20-10.0.7.41:22-4.153.228.146:56502.service - OpenSSH per-connection server daemon (4.153.228.146:56502). 
Jan 27 05:42:23.132000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.135196 sshd[5280]: Accepted publickey for core from 4.153.228.146 port 56502 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:23.137714 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:23.139097 kernel: audit: type=1101 audit(1769492543.132:871): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.135000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.144917 kernel: audit: type=1103 audit(1769492543.135:872): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.145001 kernel: audit: type=1006 audit(1769492543.135:873): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 27 05:42:23.147555 kernel: audit: type=1300 audit(1769492543.135:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4d1cbe20 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:23.135000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4d1cbe20 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:23.146878 systemd-logind[1655]: New session 21 of user core. Jan 27 05:42:23.135000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:23.152211 kernel: audit: type=1327 audit(1769492543.135:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:23.155353 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 27 05:42:23.166588 kernel: audit: type=1105 audit(1769492543.158:874): pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.158000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.161000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.173051 kernel: audit: type=1103 audit(1769492543.161:875): pid=5284 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.499790 sshd[5284]: Connection closed by 4.153.228.146 port 56502 Jan 27 05:42:23.501284 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:23.505000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.507000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.511732 kubelet[2895]: E0127 05:42:23.507215 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:42:23.514053 kernel: audit: type=1106 audit(1769492543.505:876): pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh 
res=success' Jan 27 05:42:23.514130 kernel: audit: type=1104 audit(1769492543.507:877): pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:23.515824 systemd[1]: sshd@20-10.0.7.41:22-4.153.228.146:56502.service: Deactivated successfully. Jan 27 05:42:23.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.7.41:22-4.153.228.146:56502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:23.519777 systemd[1]: session-21.scope: Deactivated successfully. Jan 27 05:42:23.521807 systemd-logind[1655]: Session 21 logged out. Waiting for processes to exit. Jan 27 05:42:23.522460 systemd-logind[1655]: Removed session 21. Jan 27 05:42:25.132172 update_engine[1659]: I20260127 05:42:25.131695 1659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:42:25.132172 update_engine[1659]: I20260127 05:42:25.131778 1659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:42:25.132172 update_engine[1659]: I20260127 05:42:25.132126 1659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 27 05:42:25.139578 update_engine[1659]: E20260127 05:42:25.139545 1659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:42:25.139741 update_engine[1659]: I20260127 05:42:25.139727 1659 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139770 1659 omaha_request_action.cc:617] Omaha request response: Jan 27 05:42:25.140138 update_engine[1659]: E20260127 05:42:25.139834 1659 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139851 1659 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139856 1659 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139861 1659 update_attempter.cc:306] Processing Done. Jan 27 05:42:25.140138 update_engine[1659]: E20260127 05:42:25.139874 1659 update_attempter.cc:619] Update failed. Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139879 1659 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139884 1659 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139892 1659 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139952 1659 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139974 1659 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 27 05:42:25.140138 update_engine[1659]: I20260127 05:42:25.139978 1659 omaha_request_action.cc:272] Request: Jan 27 05:42:25.140138 update_engine[1659]: Jan 27 05:42:25.140138 update_engine[1659]: Jan 27 05:42:25.140138 update_engine[1659]: Jan 27 05:42:25.140138 update_engine[1659]: Jan 27 05:42:25.140138 update_engine[1659]: Jan 27 05:42:25.140138 update_engine[1659]: Jan 27 05:42:25.140469 update_engine[1659]: I20260127 05:42:25.139986 1659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:42:25.140469 update_engine[1659]: I20260127 05:42:25.140002 1659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:42:25.140664 update_engine[1659]: I20260127 05:42:25.140648 1659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 27 05:42:25.140838 locksmithd[1703]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 27 05:42:25.146108 update_engine[1659]: E20260127 05:42:25.146083 1659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:42:25.146224 update_engine[1659]: I20260127 05:42:25.146211 1659 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 27 05:42:25.146258 update_engine[1659]: I20260127 05:42:25.146250 1659 omaha_request_action.cc:617] Omaha request response: Jan 27 05:42:25.146381 update_engine[1659]: I20260127 05:42:25.146291 1659 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 27 05:42:25.146381 update_engine[1659]: I20260127 05:42:25.146297 1659 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 27 05:42:25.146381 update_engine[1659]: I20260127 05:42:25.146302 1659 update_attempter.cc:306] Processing Done. Jan 27 05:42:25.146381 update_engine[1659]: I20260127 05:42:25.146307 1659 update_attempter.cc:310] Error event sent. Jan 27 05:42:25.146381 update_engine[1659]: I20260127 05:42:25.146315 1659 update_check_scheduler.cc:74] Next update check in 40m35s Jan 27 05:42:25.146616 locksmithd[1703]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 27 05:42:28.505284 kubelet[2895]: E0127 05:42:28.504984 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:42:28.605606 systemd[1]: Started sshd@21-10.0.7.41:22-4.153.228.146:42912.service - OpenSSH per-connection server daemon (4.153.228.146:42912). Jan 27 05:42:28.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.7.41:22-4.153.228.146:42912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:28.606772 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:42:28.606853 kernel: audit: type=1130 audit(1769492548.604:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.7.41:22-4.153.228.146:42912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:29.141000 audit[5298]: USER_ACCT pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.144158 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:29.145541 sshd[5298]: Accepted publickey for core from 4.153.228.146 port 42912 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:29.147071 kernel: audit: type=1101 audit(1769492549.141:880): pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.152047 kernel: audit: type=1103 audit(1769492549.141:881): pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.141000 audit[5298]: CRED_ACQ pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.156766 systemd-logind[1655]: New session 22 of user core. Jan 27 05:42:29.162054 kernel: audit: type=1006 audit(1769492549.141:882): pid=5298 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 27 05:42:29.141000 audit[5298]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff70ccc330 a2=3 a3=0 items=0 ppid=1 pid=5298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:29.167106 kernel: audit: type=1300 audit(1769492549.141:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff70ccc330 a2=3 a3=0 items=0 ppid=1 pid=5298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:29.168352 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 27 05:42:29.141000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:29.172000 audit[5298]: USER_START pid=5298 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.177620 kernel: audit: type=1327 audit(1769492549.141:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:29.177654 kernel: audit: type=1105 audit(1769492549.172:883): pid=5298 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.179000 audit[5302]: CRED_ACQ pid=5302 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.187053 kernel: audit: type=1103 audit(1769492549.179:884): pid=5302 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.516059 sshd[5302]: Connection closed by 4.153.228.146 port 42912 Jan 27 05:42:29.516175 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:29.516000 audit[5298]: USER_END pid=5298 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.521626 systemd[1]: sshd@21-10.0.7.41:22-4.153.228.146:42912.service: Deactivated successfully. Jan 27 05:42:29.516000 audit[5298]: CRED_DISP pid=5298 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.525110 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 27 05:42:29.525501 kernel: audit: type=1106 audit(1769492549.516:885): pid=5298 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.525555 kernel: audit: type=1104 audit(1769492549.516:886): pid=5298 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:29.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.7.41:22-4.153.228.146:42912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:29.527153 systemd-logind[1655]: Session 22 logged out. Waiting for processes to exit. Jan 27 05:42:29.530593 systemd-logind[1655]: Removed session 22. Jan 27 05:42:31.504993 kubelet[2895]: E0127 05:42:31.504894 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:42:31.506139 kubelet[2895]: E0127 05:42:31.506075 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:42:34.506139 kubelet[2895]: E0127 05:42:34.505392 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:42:34.506139 kubelet[2895]: E0127 05:42:34.505464 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:42:34.627845 systemd[1]: Started sshd@22-10.0.7.41:22-4.153.228.146:57282.service - OpenSSH per-connection server daemon (4.153.228.146:57282). Jan 27 05:42:34.631304 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:42:34.631333 kernel: audit: type=1130 audit(1769492554.626:888): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.7.41:22-4.153.228.146:57282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:34.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.7.41:22-4.153.228.146:57282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:35.159000 audit[5315]: USER_ACCT pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.161362 sshd[5315]: Accepted publickey for core from 4.153.228.146 port 57282 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:35.164000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.167412 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:35.168901 kernel: audit: type=1101 audit(1769492555.159:889): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.168955 kernel: audit: type=1103 audit(1769492555.164:890): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.165000 audit[5315]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdac8ac5d0 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:35.176713 kernel: audit: type=1006 audit(1769492555.165:891): pid=5315 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 27 05:42:35.176764 kernel: audit: type=1300 audit(1769492555.165:891): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdac8ac5d0 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:35.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:35.180984 kernel: audit: type=1327 audit(1769492555.165:891): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:35.183384 systemd-logind[1655]: New session 23 of user core. Jan 27 05:42:35.189200 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 27 05:42:35.191000 audit[5315]: USER_START pid=5315 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.195000 audit[5344]: CRED_ACQ pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.198534 kernel: audit: type=1105 audit(1769492555.191:892): pid=5315 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.198583 kernel: audit: type=1103 audit(1769492555.195:893): pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.547920 sshd[5344]: Connection closed by 4.153.228.146 port 57282 Jan 27 05:42:35.549179 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:35.549000 audit[5315]: USER_END pid=5315 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.552406 systemd-logind[1655]: Session 23 logged out. Waiting for processes to exit. Jan 27 05:42:35.549000 audit[5315]: CRED_DISP pid=5315 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.554803 systemd[1]: sshd@22-10.0.7.41:22-4.153.228.146:57282.service: Deactivated successfully. 
Jan 27 05:42:35.556514 kernel: audit: type=1106 audit(1769492555.549:894): pid=5315 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.556560 kernel: audit: type=1104 audit(1769492555.549:895): pid=5315 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:35.557356 systemd[1]: session-23.scope: Deactivated successfully. Jan 27 05:42:35.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.7.41:22-4.153.228.146:57282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:35.560702 systemd-logind[1655]: Removed session 23. Jan 27 05:42:38.507512 kubelet[2895]: E0127 05:42:38.507461 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:42:43.509303 kubelet[2895]: E0127 05:42:43.509268 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:42:45.506969 kubelet[2895]: E0127 05:42:45.506536 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:42:45.508641 kubelet[2895]: E0127 05:42:45.508371 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:42:46.506069 kubelet[2895]: E0127 05:42:46.506026 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:42:48.506508 kubelet[2895]: E0127 05:42:48.506242 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:42:52.505709 containerd[1681]: time="2026-01-27T05:42:52.505632839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:42:52.855137 containerd[1681]: time="2026-01-27T05:42:52.854778473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:52.856756 containerd[1681]: time="2026-01-27T05:42:52.856620944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:42:52.856756 containerd[1681]: time="2026-01-27T05:42:52.856635183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:52.856936 kubelet[2895]: E0127 05:42:52.856852 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:42:52.856936 kubelet[2895]: E0127 05:42:52.856894 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:42:52.857406 kubelet[2895]: E0127 05:42:52.857000 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:52.859193 containerd[1681]: time="2026-01-27T05:42:52.859162990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:42:53.213116 containerd[1681]: time="2026-01-27T05:42:53.212783307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:53.214842 containerd[1681]: time="2026-01-27T05:42:53.214699264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:42:53.214842 containerd[1681]: time="2026-01-27T05:42:53.214807695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:53.215663 kubelet[2895]: E0127 05:42:53.215161 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:42:53.215663 kubelet[2895]: E0127 05:42:53.215215 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:42:53.215663 kubelet[2895]: E0127 05:42:53.215346 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nm94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mf6bj_calico-system(7ea10135-90f4-4815-b58a-eefd271d18ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:53.216841 kubelet[2895]: E0127 05:42:53.216789 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:42:57.505504 containerd[1681]: time="2026-01-27T05:42:57.505413118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:42:57.507171 kubelet[2895]: E0127 05:42:57.507126 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:42:57.850510 containerd[1681]: time="2026-01-27T05:42:57.850296488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:57.853310 containerd[1681]: time="2026-01-27T05:42:57.853151043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:42:57.853310 containerd[1681]: time="2026-01-27T05:42:57.853198920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:57.853769 kubelet[2895]: E0127 05:42:57.853686 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:42:57.853965 kubelet[2895]: E0127 05:42:57.853921 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:42:57.854475 kubelet[2895]: E0127 05:42:57.854364 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2fvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kbs8f_calico-system(22258eaf-cd76-4bd2-ad47-8f4a85b664bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:57.855759 kubelet[2895]: E0127 05:42:57.855704 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:42:58.505055 containerd[1681]: time="2026-01-27T05:42:58.505004200Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:42:58.843669 containerd[1681]: time="2026-01-27T05:42:58.843498181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:58.845089 containerd[1681]: time="2026-01-27T05:42:58.845047680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:42:58.845176 containerd[1681]: time="2026-01-27T05:42:58.845119806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:58.845277 kubelet[2895]: E0127 05:42:58.845241 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:42:58.845575 kubelet[2895]: E0127 05:42:58.845289 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:42:58.845575 kubelet[2895]: E0127 05:42:58.845415 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68f86b6c77-cbrw7_calico-system(cf580b0a-7ab1-4b43-ad9f-7219ad766e09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:58.846618 kubelet[2895]: E0127 05:42:58.846572 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:42:59.508545 kubelet[2895]: E0127 05:42:59.508487 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:43:00.638646 kubelet[2895]: E0127 05:43:00.638597 2895 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.7.41:57344->10.0.7.109:2379: read: connection timed out" Jan 27 05:43:01.450505 systemd[1]: cri-containerd-d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f.scope: Deactivated successfully. Jan 27 05:43:01.451419 systemd[1]: cri-containerd-d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f.scope: Consumed 3.318s CPU time, 59.2M memory peak, 192K read from disk. 
Jan 27 05:43:01.451000 audit: BPF prog-id=259 op=LOAD Jan 27 05:43:01.453261 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:43:01.453343 kernel: audit: type=1334 audit(1769492581.451:897): prog-id=259 op=LOAD Jan 27 05:43:01.456177 containerd[1681]: time="2026-01-27T05:43:01.455973482Z" level=info msg="received container exit event container_id:\"d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f\" id:\"d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f\" pid:2741 exit_status:1 exited_at:{seconds:1769492581 nanos:451955347}" Jan 27 05:43:01.456445 kernel: audit: type=1334 audit(1769492581.451:898): prog-id=91 op=UNLOAD Jan 27 05:43:01.451000 audit: BPF prog-id=91 op=UNLOAD Jan 27 05:43:01.456000 audit: BPF prog-id=106 op=UNLOAD Jan 27 05:43:01.456000 audit: BPF prog-id=110 op=UNLOAD Jan 27 05:43:01.461556 kernel: audit: type=1334 audit(1769492581.456:899): prog-id=106 op=UNLOAD Jan 27 05:43:01.461604 kernel: audit: type=1334 audit(1769492581.456:900): prog-id=110 op=UNLOAD Jan 27 05:43:01.482560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f-rootfs.mount: Deactivated successfully. Jan 27 05:43:01.916467 systemd[1]: cri-containerd-178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e.scope: Deactivated successfully. Jan 27 05:43:01.917362 systemd[1]: cri-containerd-178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e.scope: Consumed 23.243s CPU time, 112.4M memory peak. Jan 27 05:43:01.918449 containerd[1681]: time="2026-01-27T05:43:01.918016512Z" level=info msg="received container exit event container_id:\"178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e\" id:\"178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e\" pid:3213 exit_status:1 exited_at:{seconds:1769492581 nanos:917495606}" Jan 27 05:43:01.919000 audit: BPF prog-id=149 op=UNLOAD Jan 27 05:43:01.922059 kernel: audit: type=1334 audit(1769492581.919:901): prog-id=149 op=UNLOAD Jan 27 05:43:01.919000 audit: BPF prog-id=153 op=UNLOAD Jan 27 05:43:01.924265 kernel: audit: type=1334 audit(1769492581.919:902): prog-id=153 op=UNLOAD Jan 27 05:43:01.942025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e-rootfs.mount: Deactivated successfully. 
Jan 27 05:43:02.081321 kubelet[2895]: I0127 05:43:02.081264 2895 scope.go:117] "RemoveContainer" containerID="d58887ccd8e4f00fa6f385d083a74f9614d5b110bbabca36985593f2819ea59f" Jan 27 05:43:02.083265 containerd[1681]: time="2026-01-27T05:43:02.083237054Z" level=info msg="CreateContainer within sandbox \"724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 27 05:43:02.085461 kubelet[2895]: I0127 05:43:02.085440 2895 scope.go:117] "RemoveContainer" containerID="178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e" Jan 27 05:43:02.087144 containerd[1681]: time="2026-01-27T05:43:02.087118355Z" level=info msg="CreateContainer within sandbox \"ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 27 05:43:02.101001 containerd[1681]: time="2026-01-27T05:43:02.100563084Z" level=info msg="Container 7b1aa6a168d30235bc1d99a3c97e8a51e87c5281b429dffe0b023700f53a9b37: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:43:02.103605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896689799.mount: Deactivated successfully. Jan 27 05:43:02.106679 containerd[1681]: time="2026-01-27T05:43:02.106567664Z" level=info msg="Container 692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:43:02.114019 containerd[1681]: time="2026-01-27T05:43:02.113972801Z" level=info msg="CreateContainer within sandbox \"724b0828ac89ae2dd32688129197a6677b2b34fbb8212bbc250e20b17cab4324\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7b1aa6a168d30235bc1d99a3c97e8a51e87c5281b429dffe0b023700f53a9b37\"" Jan 27 05:43:02.114658 containerd[1681]: time="2026-01-27T05:43:02.114631263Z" level=info msg="StartContainer for \"7b1aa6a168d30235bc1d99a3c97e8a51e87c5281b429dffe0b023700f53a9b37\"" Jan 27 05:43:02.116262 containerd[1681]: time="2026-01-27T05:43:02.116236319Z" level=info msg="connecting to shim 7b1aa6a168d30235bc1d99a3c97e8a51e87c5281b429dffe0b023700f53a9b37" address="unix:///run/containerd/s/ffa770c716ab08b1817ab9dae90bd2804b8a7ecf562ed629cae86a5bc97a3afc" protocol=ttrpc version=3 Jan 27 05:43:02.119973 containerd[1681]: time="2026-01-27T05:43:02.119939532Z" level=info msg="CreateContainer within sandbox \"ebd83958ea20ea74b4c1b8a0875638ecf2559ce38452a228618bb00ea717a8a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05\"" Jan 27 05:43:02.120663 containerd[1681]: time="2026-01-27T05:43:02.120359914Z" level=info msg="StartContainer for \"692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05\"" Jan 27 05:43:02.121022 containerd[1681]: time="2026-01-27T05:43:02.120998563Z" level=info msg="connecting to shim 692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05" address="unix:///run/containerd/s/49bc2908d2dcc38764c3306e1a971d8802cc87595b2240df52cd5c4d6406d03c" protocol=ttrpc version=3 Jan 27 05:43:02.140255 systemd[1]: Started cri-containerd-7b1aa6a168d30235bc1d99a3c97e8a51e87c5281b429dffe0b023700f53a9b37.scope - libcontainer container 7b1aa6a168d30235bc1d99a3c97e8a51e87c5281b429dffe0b023700f53a9b37. Jan 27 05:43:02.144756 systemd[1]: Started cri-containerd-692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05.scope - libcontainer container 692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05. 
Jan 27 05:43:02.155000 audit: BPF prog-id=260 op=LOAD Jan 27 05:43:02.158161 kernel: audit: type=1334 audit(1769492582.155:903): prog-id=260 op=LOAD Jan 27 05:43:02.157000 audit: BPF prog-id=261 op=LOAD Jan 27 05:43:02.157000 audit[5395]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.161396 kernel: audit: type=1334 audit(1769492582.157:904): prog-id=261 op=LOAD Jan 27 05:43:02.161445 kernel: audit: type=1300 audit(1769492582.157:904): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.169054 kernel: audit: type=1327 audit(1769492582.157:904): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.157000 audit: BPF prog-id=261 op=UNLOAD Jan 27 05:43:02.157000 audit[5395]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.157000 audit: BPF prog-id=262 op=LOAD Jan 27 05:43:02.157000 audit[5395]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.157000 audit: BPF prog-id=263 op=LOAD Jan 27 05:43:02.157000 audit[5395]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.157000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.157000 audit: BPF prog-id=263 op=UNLOAD Jan 27 05:43:02.157000 audit[5395]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.157000 audit: BPF prog-id=262 op=UNLOAD Jan 27 05:43:02.157000 audit[5395]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.157000 audit: BPF prog-id=264 op=LOAD Jan 27 05:43:02.157000 audit[5395]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2590 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316161366131363864333032333562633164393961336339376538 Jan 27 05:43:02.171000 audit: BPF prog-id=265 op=LOAD Jan 27 05:43:02.171000 audit: BPF prog-id=266 op=LOAD Jan 27 05:43:02.171000 audit[5401]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3012 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323739396339353262656432326332623662396365396166613433 Jan 27 05:43:02.171000 audit: BPF prog-id=266 op=UNLOAD Jan 27 05:43:02.171000 audit[5401]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.171000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323739396339353262656432326332623662396365396166613433 Jan 27 05:43:02.171000 audit: BPF prog-id=267 op=LOAD Jan 27 05:43:02.171000 audit[5401]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3012 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323739396339353262656432326332623662396365396166613433 Jan 27 05:43:02.172000 audit: BPF prog-id=268 op=LOAD Jan 27 05:43:02.172000 audit[5401]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3012 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323739396339353262656432326332623662396365396166613433 Jan 27 05:43:02.172000 audit: BPF prog-id=268 op=UNLOAD Jan 27 05:43:02.172000 audit[5401]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323739396339353262656432326332623662396365396166613433 Jan 27 05:43:02.172000 audit: BPF prog-id=267 op=UNLOAD Jan 27 05:43:02.172000 audit[5401]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323739396339353262656432326332623662396365396166613433 Jan 27 05:43:02.172000 audit: BPF prog-id=269 op=LOAD Jan 27 05:43:02.172000 audit[5401]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3012 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.172000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639323739396339353262656432326332623662396365396166613433 Jan 27 05:43:02.197325 containerd[1681]: time="2026-01-27T05:43:02.197240596Z" level=info msg="StartContainer for \"692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05\" returns successfully" Jan 27 05:43:02.218767 containerd[1681]: time="2026-01-27T05:43:02.218683510Z" level=info msg="StartContainer for \"7b1aa6a168d30235bc1d99a3c97e8a51e87c5281b429dffe0b023700f53a9b37\" returns successfully" Jan 27 05:43:03.009276 systemd[1]: Started sshd@23-10.0.7.41:22-114.111.54.188:47048.service - OpenSSH per-connection server daemon (114.111.54.188:47048). Jan 27 05:43:03.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.7.41:22-114.111.54.188:47048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:03.506491 containerd[1681]: time="2026-01-27T05:43:03.506242678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:43:03.837124 sshd[5460]: Invalid user orangepi from 114.111.54.188 port 47048 Jan 27 05:43:03.851021 containerd[1681]: time="2026-01-27T05:43:03.850752131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:03.853675 containerd[1681]: time="2026-01-27T05:43:03.853518018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:43:03.853675 containerd[1681]: time="2026-01-27T05:43:03.853560420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:03.854302 kubelet[2895]: E0127 05:43:03.853819 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:43:03.854302 kubelet[2895]: E0127 05:43:03.853874 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:43:03.854302 kubelet[2895]: E0127 05:43:03.854077 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ptxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-7r4l6_calico-apiserver(2a3676d6-dbc6-4326-8dba-e3375f935a86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:03.855337 kubelet[2895]: E0127 05:43:03.855275 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-7r4l6" podUID="2a3676d6-dbc6-4326-8dba-e3375f935a86" Jan 27 05:43:04.035019 sshd[5460]: Connection closed by invalid user orangepi 114.111.54.188 port 47048 [preauth] Jan 27 05:43:04.034000 audit[5460]: USER_ERR pid=5460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=114.111.54.188 addr=114.111.54.188 terminal=ssh res=failed' Jan 27 05:43:04.038020 systemd[1]: sshd@23-10.0.7.41:22-114.111.54.188:47048.service: Deactivated successfully. Jan 27 05:43:04.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.7.41:22-114.111.54.188:47048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:05.491346 kubelet[2895]: E0127 05:43:05.491223 2895 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.7.41:57186->10.0.7.109:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4592-0-0-n-eb4c5d05b1.188e801f2e7dd768 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4592-0-0-n-eb4c5d05b1,UID:407560494481a0ed4e30f3d50b60939f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4592-0-0-n-eb4c5d05b1,},FirstTimestamp:2026-01-27 05:42:55.0155242 +0000 UTC m=+209.610022796,LastTimestamp:2026-01-27 05:42:55.0155242 +0000 UTC m=+209.610022796,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4592-0-0-n-eb4c5d05b1,}" Jan 27 05:43:05.505761 kubelet[2895]: E0127 05:43:05.505512 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mf6bj" podUID="7ea10135-90f4-4815-b58a-eefd271d18ce" Jan 27 05:43:05.754165 systemd[1]: cri-containerd-534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4.scope: Deactivated successfully. Jan 27 05:43:05.754000 audit: BPF prog-id=270 op=LOAD Jan 27 05:43:05.754000 audit: BPF prog-id=86 op=UNLOAD Jan 27 05:43:05.754974 systemd[1]: cri-containerd-534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4.scope: Consumed 2.821s CPU time, 24M memory peak, 256K read from disk. Jan 27 05:43:05.757000 audit: BPF prog-id=101 op=UNLOAD Jan 27 05:43:05.757000 audit: BPF prog-id=105 op=UNLOAD Jan 27 05:43:05.758724 containerd[1681]: time="2026-01-27T05:43:05.758670940Z" level=info msg="received container exit event container_id:\"534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4\" id:\"534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4\" pid:2714 exit_status:1 exited_at:{seconds:1769492585 nanos:757709487}" Jan 27 05:43:05.782641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4-rootfs.mount: Deactivated successfully. 
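The audit PROCTITLE records above (and again further down) carry the audited process's command line as one hex string, with NUL bytes separating the arguments; they decode to the runc invocation for containerd's k8s.io namespace, and because the records are truncated in this log the decoded container ID is also only a prefix. A minimal standalone sketch for decoding such a field follows; the helper name and the sample value (a leading slice of one record above) are illustrative and not part of any tool referenced in this log.

    # Minimal sketch: decode an audit PROCTITLE hex field back into a command line.
    # The field hex-encodes argv with NUL separators; the sample is only the leading
    # portion of one record above, since the full value is truncated in the log.
    def decode_proctitle(hex_field: str) -> str:
        raw = bytes.fromhex(hex_field)
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(sample))  # -> runc --root /run/containerd/runc/k8s.io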
Jan 27 05:43:06.100935 kubelet[2895]: I0127 05:43:06.100840 2895 scope.go:117] "RemoveContainer" containerID="534f5c98227382b93206b6dfe0e0dbf4bfe84086740289ef5a75f31bacd5f5a4" Jan 27 05:43:06.105385 containerd[1681]: time="2026-01-27T05:43:06.105350924Z" level=info msg="CreateContainer within sandbox \"154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 27 05:43:06.119282 containerd[1681]: time="2026-01-27T05:43:06.119243569Z" level=info msg="Container 470e74e1df1110676efb00891c906335109b93aa263263dbab7888136a03dc35: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:43:06.120888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206102618.mount: Deactivated successfully. Jan 27 05:43:06.129185 containerd[1681]: time="2026-01-27T05:43:06.129133746Z" level=info msg="CreateContainer within sandbox \"154e4f6b0139dace890c42981ea89f17a4d7e7e3629e83c44e2716a117464170\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"470e74e1df1110676efb00891c906335109b93aa263263dbab7888136a03dc35\"" Jan 27 05:43:06.129798 containerd[1681]: time="2026-01-27T05:43:06.129775045Z" level=info msg="StartContainer for \"470e74e1df1110676efb00891c906335109b93aa263263dbab7888136a03dc35\"" Jan 27 05:43:06.130801 containerd[1681]: time="2026-01-27T05:43:06.130764197Z" level=info msg="connecting to shim 470e74e1df1110676efb00891c906335109b93aa263263dbab7888136a03dc35" address="unix:///run/containerd/s/ee076ce4bfd52c155852d20b848fa20667053fcbe35ab1f5a29543c353e6a34c" protocol=ttrpc version=3 Jan 27 05:43:06.154242 systemd[1]: Started cri-containerd-470e74e1df1110676efb00891c906335109b93aa263263dbab7888136a03dc35.scope - libcontainer container 470e74e1df1110676efb00891c906335109b93aa263263dbab7888136a03dc35. 
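The entries just above are kubelet replacing the exited kube-scheduler container: a "received container exit event ... exit_status:1" record, the "RemoveContainer" decision, then "CreateContainer within sandbox ... Attempt:1", "StartContainer", and containerd connecting to a new shim over ttrpc. A small sketch for pulling such restart sequences out of a journal dump is below; the regular expressions only target the literal containerd message strings visible in this log (including their escaped quotes), and the function name and "journal.txt" path are assumptions for illustration.

    # Minimal sketch: list container exit events and subsequent CreateContainer
    # attempts from journal text like the log above. Matches the literal message
    # formats seen here; not part of containerd or kubelet.
    import re

    EXIT_RE = re.compile(
        r'received container exit event container_id:\\?"([0-9a-f]+)\\?".*?exit_status:(\d+)')
    CREATE_RE = re.compile(
        r'CreateContainer within sandbox \\?"([0-9a-f]+)\\?" for container '
        r'&ContainerMetadata\{Name:([^,]+),Attempt:(\d+),\}')

    def summarize_restarts(journal_text: str) -> None:
        for cid, status in EXIT_RE.findall(journal_text):
            print(f"exit  {cid[:12]} status={status}")
        for sandbox, name, attempt in CREATE_RE.findall(journal_text):
            print(f"start {name} attempt={attempt} sandbox={sandbox[:12]}")

    # Usage (hypothetical file name):
    #   summarize_restarts(open("journal.txt").read())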
Jan 27 05:43:06.164000 audit: BPF prog-id=271 op=LOAD Jan 27 05:43:06.165000 audit: BPF prog-id=272 op=LOAD Jan 27 05:43:06.165000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2571 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437306537346531646631313130363736656662303038393163393036 Jan 27 05:43:06.165000 audit: BPF prog-id=272 op=UNLOAD Jan 27 05:43:06.165000 audit[5508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437306537346531646631313130363736656662303038393163393036 Jan 27 05:43:06.165000 audit: BPF prog-id=273 op=LOAD Jan 27 05:43:06.165000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2571 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437306537346531646631313130363736656662303038393163393036 Jan 27 05:43:06.165000 audit: BPF prog-id=274 op=LOAD Jan 27 05:43:06.165000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2571 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437306537346531646631313130363736656662303038393163393036 Jan 27 05:43:06.165000 audit: BPF prog-id=274 op=UNLOAD Jan 27 05:43:06.165000 audit[5508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437306537346531646631313130363736656662303038393163393036 Jan 27 05:43:06.165000 audit: BPF prog-id=273 op=UNLOAD Jan 27 05:43:06.165000 audit[5508]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437306537346531646631313130363736656662303038393163393036 Jan 27 05:43:06.165000 audit: BPF prog-id=275 op=LOAD Jan 27 05:43:06.165000 audit[5508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2571 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:06.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437306537346531646631313130363736656662303038393163393036 Jan 27 05:43:06.209140 containerd[1681]: time="2026-01-27T05:43:06.209105328Z" level=info msg="StartContainer for \"470e74e1df1110676efb00891c906335109b93aa263263dbab7888136a03dc35\" returns successfully" Jan 27 05:43:09.396433 kubelet[2895]: I0127 05:43:09.396352 2895 status_manager.go:890] "Failed to get status for pod" podUID="75bcf10f02a58c1e2b6f37ff11a1481d" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-eb4c5d05b1" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.7.41:57264->10.0.7.109:2379: read: connection timed out" Jan 27 05:43:10.639856 kubelet[2895]: E0127 05:43:10.639108 2895 controller.go:195] "Failed to update lease" err="Put \"https://10.0.7.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-eb4c5d05b1?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 05:43:11.505617 containerd[1681]: time="2026-01-27T05:43:11.505358783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:43:11.844800 containerd[1681]: time="2026-01-27T05:43:11.844648755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:11.846462 containerd[1681]: time="2026-01-27T05:43:11.846418074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:43:11.846754 containerd[1681]: time="2026-01-27T05:43:11.846495313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:11.846796 kubelet[2895]: E0127 05:43:11.846635 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:43:11.846796 kubelet[2895]: E0127 05:43:11.846683 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:43:11.847447 kubelet[2895]: E0127 05:43:11.847390 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz57s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9df44df9-9vzkl_calico-apiserver(7997e895-ab0d-47da-83eb-264fa47d7c87): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:11.848587 kubelet[2895]: E0127 05:43:11.848546 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9df44df9-9vzkl" podUID="7997e895-ab0d-47da-83eb-264fa47d7c87" Jan 27 05:43:12.504662 kubelet[2895]: E0127 05:43:12.504574 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kbs8f" podUID="22258eaf-cd76-4bd2-ad47-8f4a85b664bd" Jan 27 05:43:12.505243 containerd[1681]: time="2026-01-27T05:43:12.504987049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:43:12.849139 containerd[1681]: time="2026-01-27T05:43:12.848965302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:12.850784 containerd[1681]: time="2026-01-27T05:43:12.850749714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:43:12.851078 containerd[1681]: time="2026-01-27T05:43:12.850830171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:12.851148 kubelet[2895]: E0127 05:43:12.850972 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:43:12.851573 kubelet[2895]: E0127 05:43:12.851021 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:43:12.851714 kubelet[2895]: E0127 05:43:12.851664 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57b1faf7f7fc40c5b00dfb0c507a2180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:12.853563 containerd[1681]: time="2026-01-27T05:43:12.853534684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:43:13.186491 containerd[1681]: time="2026-01-27T05:43:13.186377280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:13.187947 containerd[1681]: time="2026-01-27T05:43:13.187897129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:43:13.188066 containerd[1681]: time="2026-01-27T05:43:13.187972325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:13.188245 kubelet[2895]: E0127 05:43:13.188155 2895 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:43:13.188245 kubelet[2895]: E0127 05:43:13.188196 2895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:43:13.188405 kubelet[2895]: E0127 05:43:13.188296 2895 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdngs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bd985ccf-jxmbn_calico-system(030cf2c9-9900-4225-8b2a-d77c13f08480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:13.189469 kubelet[2895]: E0127 05:43:13.189438 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bd985ccf-jxmbn" podUID="030cf2c9-9900-4225-8b2a-d77c13f08480" Jan 27 05:43:13.397996 containerd[1681]: time="2026-01-27T05:43:13.397649472Z" level=info msg="received container exit event container_id:\"692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05\" id:\"692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05\" pid:5431 exit_status:1 exited_at:{seconds:1769492593 nanos:397378814}" Jan 27 05:43:13.397911 systemd[1]: cri-containerd-692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05.scope: Deactivated successfully. 
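Most of the errors in this stretch are the same failure repeated for different images: containerd gets a 404 from ghcr.io, kubelet surfaces it as ErrImagePull, and the pod then sits in ImagePullBackOff (calico/apiserver, goldmane, whisker, whisker-backend, csi, node-driver-registrar, and kube-controllers, all at tag v3.30.4). One quick way to see the full set at a glance is to tally the "failed to pull and unpack image" errors by image reference; the sketch below does that over journal text and counts mentions rather than distinct pull attempts. The pattern and names are assumptions based on the literal error strings in this log.

    # Minimal sketch: count "failed to pull and unpack image" mentions per image
    # reference in journal text like the log above. The optional backslashes cover
    # the nested quote escaping kubelet adds when it re-wraps containerd's error.
    import re
    from collections import Counter

    PULL_FAIL_RE = re.compile(r'failed to pull and unpack image \\*"([^"\\]+)\\*"')

    def tally_pull_failures(journal_text: str) -> Counter:
        return Counter(PULL_FAIL_RE.findall(journal_text))

    # Usage (hypothetical file name):
    #   for image, n in tally_pull_failures(open("journal.txt").read()).most_common():
    #       print(n, image)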
Jan 27 05:43:13.407968 kernel: kauditd_printk_skb: 69 callbacks suppressed Jan 27 05:43:13.408049 kernel: audit: type=1334 audit(1769492593.400:934): prog-id=265 op=UNLOAD Jan 27 05:43:13.400000 audit: BPF prog-id=265 op=UNLOAD Jan 27 05:43:13.400000 audit: BPF prog-id=269 op=UNLOAD Jan 27 05:43:13.414100 kernel: audit: type=1334 audit(1769492593.400:935): prog-id=269 op=UNLOAD Jan 27 05:43:13.439295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05-rootfs.mount: Deactivated successfully. Jan 27 05:43:13.505543 kubelet[2895]: E0127 05:43:13.505465 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68f86b6c77-cbrw7" podUID="cf580b0a-7ab1-4b43-ad9f-7219ad766e09" Jan 27 05:43:14.123489 kubelet[2895]: I0127 05:43:14.123461 2895 scope.go:117] "RemoveContainer" containerID="178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e" Jan 27 05:43:14.123829 kubelet[2895]: I0127 05:43:14.123708 2895 scope.go:117] "RemoveContainer" containerID="692799c952bed22c2b6b9ce9afa4318b814cadb89dd896c12517d6d1ff021e05" Jan 27 05:43:14.123861 kubelet[2895]: E0127 05:43:14.123830 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-msnr7_tigera-operator(52bb413b-3cc0-438f-b413-67e119c636e5)\"" pod="tigera-operator/tigera-operator-7dcd859c48-msnr7" podUID="52bb413b-3cc0-438f-b413-67e119c636e5" Jan 27 05:43:14.125048 containerd[1681]: time="2026-01-27T05:43:14.125006187Z" level=info msg="RemoveContainer for \"178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e\"" Jan 27 05:43:14.130666 containerd[1681]: time="2026-01-27T05:43:14.130589259Z" level=info msg="RemoveContainer for \"178159b7a34085f9fb34c33e50955f0695cc7b826018940b8b9e755ef9ca861e\" returns successfully"
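The tigera-operator container is now in CrashLoopBackOff: kubelet removes the previous failed instance but delays the next start, beginning at the 10s shown in the message above. Kubernetes documents this as an exponential back-off that doubles after each crash, caps at five minutes, and resets once the container has run cleanly for a while. The sketch below only illustrates that delay progression under those documented defaults; it is not kubelet's implementation, and the function name is invented for the example.

    # Illustrative sketch of the documented CrashLoopBackOff delay progression
    # (10s doubling to a 5-minute cap). Not kubelet code.
    def crashloop_delays(restarts: int, base: int = 10, cap: int = 300):
        delay = base
        for _ in range(restarts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(crashloop_delays(7)))  # -> [10, 20, 40, 80, 160, 300, 300]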