Jan 20 06:54:16.037800 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 04:11:16 -00 2026
Jan 20 06:54:16.037832 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0
Jan 20 06:54:16.037842 kernel: BIOS-provided physical RAM map:
Jan 20 06:54:16.037849 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 06:54:16.037855 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 06:54:16.037861 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 06:54:16.037871 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 06:54:16.037877 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 06:54:16.037883 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 06:54:16.037889 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 06:54:16.037896 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable
Jan 20 06:54:16.037902 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 20 06:54:16.037909 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 20 06:54:16.037915 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 20 06:54:16.037924 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 20 06:54:16.037932 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 20 06:54:16.037939 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 20 06:54:16.037945 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 20 06:54:16.037952 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 20 06:54:16.037959 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 20 06:54:16.037967 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 20 06:54:16.037974 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 20 06:54:16.037980 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 20 06:54:16.037987 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 20 06:54:16.037994 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 06:54:16.038000 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 06:54:16.038007 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 06:54:16.038013 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 20 06:54:16.038020 kernel: NX (Execute Disable) protection: active
Jan 20 06:54:16.038026 kernel: APIC: Static calls initialized
Jan 20 06:54:16.038033 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable
Jan 20 06:54:16.038042 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable
Jan 20 06:54:16.038049 kernel: extended physical RAM map:
Jan 20 06:54:16.038055 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 06:54:16.038062 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 06:54:16.038069 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 06:54:16.038076 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 06:54:16.038083 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 06:54:16.038089 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 06:54:16.038096 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 06:54:16.038108 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable
Jan 20 06:54:16.038115 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable
Jan 20 06:54:16.038122 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable
Jan 20 06:54:16.038129 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable
Jan 20 06:54:16.038137 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable
Jan 20 06:54:16.038144 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 20 06:54:16.038151 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 20 06:54:16.038158 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 20 06:54:16.038166 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 20 06:54:16.038173 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 20 06:54:16.038180 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 20 06:54:16.038187 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 20 06:54:16.038194 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 20 06:54:16.038201 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 20 06:54:16.038208 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 20 06:54:16.038216 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 20 06:54:16.038223 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 20 06:54:16.038230 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 20 06:54:16.038237 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 06:54:16.038244 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 06:54:16.038251 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 06:54:16.038258 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 20 06:54:16.038282 kernel: efi: EFI v2.7 by EDK II
Jan 20 06:54:16.038289 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018
Jan 20 06:54:16.038296 kernel: random: crng init done
Jan 20 06:54:16.038303 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 06:54:16.038312 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 06:54:16.038319 kernel: secureboot: Secure boot disabled
Jan 20 06:54:16.038326 kernel: SMBIOS 2.8 present.
Jan 20 06:54:16.038333 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 06:54:16.038341 kernel: DMI: Memory slots populated: 1/1
Jan 20 06:54:16.038347 kernel: Hypervisor detected: KVM
Jan 20 06:54:16.038355 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 20 06:54:16.038362 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 06:54:16.038369 kernel: kvm-clock: using sched offset of 5627201981 cycles
Jan 20 06:54:16.038376 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 06:54:16.038386 kernel: tsc: Detected 2294.608 MHz processor
Jan 20 06:54:16.038394 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 06:54:16.038402 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 06:54:16.038409 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000
Jan 20 06:54:16.038417 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 06:54:16.038425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 06:54:16.038432 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 20 06:54:16.038440 kernel: Using GB pages for direct mapping
Jan 20 06:54:16.038449 kernel: ACPI: Early table checksum verification disabled
Jan 20 06:54:16.038456 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 20 06:54:16.038464 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jan 20 06:54:16.038472 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:54:16.038479 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:54:16.038487 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 20 06:54:16.038495 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:54:16.038504 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:54:16.038511 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:54:16.038519 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 06:54:16.038526 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3]
Jan 20 06:54:16.038533 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b]
Jan 20 06:54:16.038541 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 20 06:54:16.038548 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f]
Jan 20 06:54:16.038557 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b]
Jan 20 06:54:16.038565 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027]
Jan 20 06:54:16.038573 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037]
Jan 20 06:54:16.038580 kernel: No NUMA configuration found
Jan 20 06:54:16.038588 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 20 06:54:16.038595 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jan 20 06:54:16.038603 kernel: Zone ranges:
Jan 20 06:54:16.038610 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 06:54:16.038619 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 20 06:54:16.038627 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 20 06:54:16.038634 kernel: Device empty
Jan 20 06:54:16.038642 kernel: Movable zone start for each node
Jan 20 06:54:16.038650 kernel: Early memory node ranges
Jan 20 06:54:16.038657 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 06:54:16.038665 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 06:54:16.038672 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 06:54:16.038681 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 06:54:16.038689 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff]
Jan 20 06:54:16.038696 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff]
Jan 20 06:54:16.038703 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff]
Jan 20 06:54:16.038717 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff]
Jan 20 06:54:16.038727 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff]
Jan 20 06:54:16.038734 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 20 06:54:16.038742 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 20 06:54:16.038750 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 06:54:16.038760 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 06:54:16.038768 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 06:54:16.038776 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 06:54:16.038785 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 06:54:16.038795 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 06:54:16.038803 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges
Jan 20 06:54:16.038811 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 06:54:16.038819 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 06:54:16.038827 kernel: On node 0, zone Normal: 276 pages in unavailable ranges
Jan 20 06:54:16.038836 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 06:54:16.038844 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 06:54:16.038852 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 06:54:16.038862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 06:54:16.038870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 06:54:16.038878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 06:54:16.038886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 06:54:16.038894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 06:54:16.038902 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 06:54:16.038910 kernel: TSC deadline timer available
Jan 20 06:54:16.038920 kernel: CPU topo: Max. logical packages: 2
Jan 20 06:54:16.038928 kernel: CPU topo: Max. logical dies: 2
Jan 20 06:54:16.038936 kernel: CPU topo: Max. dies per package: 1
Jan 20 06:54:16.038943 kernel: CPU topo: Max. threads per core: 1
Jan 20 06:54:16.038951 kernel: CPU topo: Num. cores per package: 1
Jan 20 06:54:16.038959 kernel: CPU topo: Num. threads per package: 1
Jan 20 06:54:16.038968 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 20 06:54:16.038975 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 06:54:16.038985 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 06:54:16.038993 kernel: kvm-guest: setup PV sched yield
Jan 20 06:54:16.039001 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 20 06:54:16.039019 kernel: Booting paravirtualized kernel on KVM
Jan 20 06:54:16.039027 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 06:54:16.039036 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 20 06:54:16.039044 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 20 06:54:16.039054 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 20 06:54:16.039062 kernel: pcpu-alloc: [0] 0 1
Jan 20 06:54:16.039070 kernel: kvm-guest: PV spinlocks enabled
Jan 20 06:54:16.039078 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 06:54:16.039087 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0
Jan 20 06:54:16.039096 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 06:54:16.039106 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 06:54:16.039114 kernel: Fallback order for Node 0: 0
Jan 20 06:54:16.039122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694
Jan 20 06:54:16.039130 kernel: Policy zone: Normal
Jan 20 06:54:16.039138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 06:54:16.039146 kernel: software IO TLB: area num 2.
Jan 20 06:54:16.039154 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 20 06:54:16.039165 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 20 06:54:16.039173 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 06:54:16.039181 kernel: Dynamic Preempt: voluntary
Jan 20 06:54:16.039190 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 06:54:16.039199 kernel: rcu: RCU event tracing is enabled.
Jan 20 06:54:16.039207 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 20 06:54:16.039216 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 06:54:16.039224 kernel: Rude variant of Tasks RCU enabled.
Jan 20 06:54:16.039234 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 06:54:16.039242 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 06:54:16.039250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 20 06:54:16.039258 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 06:54:16.039277 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 06:54:16.039285 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 06:54:16.039293 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 20 06:54:16.039303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 06:54:16.039311 kernel: Console: colour dummy device 80x25
Jan 20 06:54:16.039320 kernel: printk: legacy console [tty0] enabled
Jan 20 06:54:16.039328 kernel: printk: legacy console [ttyS0] enabled
Jan 20 06:54:16.039336 kernel: ACPI: Core revision 20240827
Jan 20 06:54:16.039344 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 06:54:16.039352 kernel: x2apic enabled
Jan 20 06:54:16.039360 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 06:54:16.039370 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 06:54:16.039379 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 06:54:16.039387 kernel: kvm-guest: setup PV IPIs
Jan 20 06:54:16.039395 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 20 06:54:16.039403 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 20 06:54:16.039411 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 06:54:16.039421 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 20 06:54:16.039429 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 20 06:54:16.039437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 06:54:16.039444 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jan 20 06:54:16.039452 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 20 06:54:16.039460 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 20 06:54:16.039467 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 06:54:16.039475 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 06:54:16.039482 kernel: TAA: Mitigation: Clear CPU buffers
Jan 20 06:54:16.039490 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 20 06:54:16.039497 kernel: active return thunk: its_return_thunk
Jan 20 06:54:16.039506 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 20 06:54:16.039514 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 06:54:16.039522 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 06:54:16.039530 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 06:54:16.039538 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 20 06:54:16.039546 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 20 06:54:16.039553 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 20 06:54:16.039561 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 20 06:54:16.039568 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 06:54:16.039577 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 20 06:54:16.039585 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 20 06:54:16.039593 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 20 06:54:16.039600 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 20 06:54:16.039608 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 20 06:54:16.039616 kernel: Freeing SMP alternatives memory: 32K
Jan 20 06:54:16.039623 kernel: pid_max: default: 32768 minimum: 301
Jan 20 06:54:16.039631 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 06:54:16.039638 kernel: landlock: Up and running.
Jan 20 06:54:16.039646 kernel: SELinux: Initializing.
Jan 20 06:54:16.039653 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 06:54:16.039661 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 06:54:16.039671 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Jan 20 06:54:16.039678 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver.
Jan 20 06:54:16.039687 kernel: ... version:                2
Jan 20 06:54:16.039695 kernel: ... bit width:              48
Jan 20 06:54:16.039703 kernel: ... generic registers:      8
Jan 20 06:54:16.039711 kernel: ... value mask:             0000ffffffffffff
Jan 20 06:54:16.039719 kernel: ... max period:             00007fffffffffff
Jan 20 06:54:16.039729 kernel: ... fixed-purpose events:   3
Jan 20 06:54:16.039737 kernel: ... event mask:             00000007000000ff
Jan 20 06:54:16.039745 kernel: signal: max sigframe size: 3632
Jan 20 06:54:16.039753 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 06:54:16.039762 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 06:54:16.039770 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 06:54:16.039778 kernel: smp: Bringing up secondary CPUs ...
Jan 20 06:54:16.039786 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 06:54:16.039796 kernel: .... node #0, CPUs: #1
Jan 20 06:54:16.039805 kernel: smp: Brought up 1 node, 2 CPUs
Jan 20 06:54:16.039813 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 20 06:54:16.039821 kernel: Memory: 3969764K/4186776K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 212132K reserved, 0K cma-reserved)
Jan 20 06:54:16.039830 kernel: devtmpfs: initialized
Jan 20 06:54:16.039838 kernel: x86/mm: Memory block size: 128MB
Jan 20 06:54:16.039846 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 06:54:16.039856 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 06:54:16.039864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 06:54:16.039873 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 20 06:54:16.039881 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes)
Jan 20 06:54:16.039889 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes)
Jan 20 06:54:16.039897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 06:54:16.039907 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 20 06:54:16.039915 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 06:54:16.039923 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 06:54:16.039932 kernel: audit: initializing netlink subsys (disabled)
Jan 20 06:54:16.039940 kernel: audit: type=2000 audit(1768892052.992:1): state=initialized audit_enabled=0 res=1
Jan 20 06:54:16.039948 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 06:54:16.039956 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 06:54:16.039964 kernel: cpuidle: using governor menu
Jan 20 06:54:16.039974 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 06:54:16.039982 kernel: dca service started, version 1.12.1
Jan 20 06:54:16.039990 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 06:54:16.039998 kernel: PCI: Using configuration type 1 for base access
Jan 20 06:54:16.040007 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 06:54:16.040015 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 06:54:16.040024 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 06:54:16.040033 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 06:54:16.040041 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 06:54:16.040049 kernel: ACPI: Added _OSI(Module Device)
Jan 20 06:54:16.040058 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 06:54:16.040065 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 06:54:16.040073 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 06:54:16.040081 kernel: ACPI: Interpreter enabled
Jan 20 06:54:16.040091 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 06:54:16.040099 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 06:54:16.040107 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 06:54:16.040115 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 06:54:16.040123 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 06:54:16.040131 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 06:54:16.040315 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 06:54:16.040425 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 06:54:16.040523 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 06:54:16.040533 kernel: PCI host bridge to bus 0000:00
Jan 20 06:54:16.040632 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 06:54:16.040721 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 06:54:16.040811 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 06:54:16.040898 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 20 06:54:16.040985 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 06:54:16.041072 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window]
Jan 20 06:54:16.041162 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 06:54:16.041284 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 06:54:16.041396 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 20 06:54:16.041497 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Jan 20 06:54:16.041599 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref]
Jan 20 06:54:16.041697 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff]
Jan 20 06:54:16.041794 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 06:54:16.041894 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 06:54:16.042007 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.042105 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff]
Jan 20 06:54:16.042203 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 06:54:16.042317 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff]
Jan 20 06:54:16.042416 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff]
Jan 20 06:54:16.042517 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 20 06:54:16.042622 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.042721 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff]
Jan 20 06:54:16.042818 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 06:54:16.042912 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff]
Jan 20 06:54:16.043018 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Jan 20 06:54:16.043127 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.043225 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff]
Jan 20 06:54:16.043331 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 06:54:16.043426 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff]
Jan 20 06:54:16.043523 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Jan 20 06:54:16.043626 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.043728 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff]
Jan 20 06:54:16.043825 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 06:54:16.043923 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff]
Jan 20 06:54:16.044019 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref]
Jan 20 06:54:16.044123 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.044222 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff]
Jan 20 06:54:16.044334 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 06:54:16.044431 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff]
Jan 20 06:54:16.044526 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref]
Jan 20 06:54:16.044628 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.044726 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff]
Jan 20 06:54:16.044826 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 06:54:16.044921 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff]
Jan 20 06:54:16.045017 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref]
Jan 20 06:54:16.045118 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.045215 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff]
Jan 20 06:54:16.045333 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 06:54:16.045429 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff]
Jan 20 06:54:16.045525 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref]
Jan 20 06:54:16.045626 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.045723 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff]
Jan 20 06:54:16.045819 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 06:54:16.045920 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff]
Jan 20 06:54:16.046017 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref]
Jan 20 06:54:16.046120 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.046218 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff]
Jan 20 06:54:16.046324 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 20 06:54:16.046421 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff]
Jan 20 06:54:16.046521 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref]
Jan 20 06:54:16.046626 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 06:54:16.046737 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff]
Jan 20 06:54:16.046834 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 20 06:54:16.046929 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff]
Jan 20 06:54:16.047036 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref]
Jan 20 06:54:16.047141 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.047239 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 20 06:54:16.047352 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 20 06:54:16.047449 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 20 06:54:16.047544 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 20 06:54:16.047644 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.047744 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 20 06:54:16.047844 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 20 06:54:16.047940 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 20 06:54:16.048035 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 20 06:54:16.048148 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.048246 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 20 06:54:16.048351 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 20 06:54:16.048447 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 20 06:54:16.048544 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 20 06:54:16.048648 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.048748 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 20 06:54:16.048843 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 20 06:54:16.048939 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 20 06:54:16.049034 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 20 06:54:16.049133 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.049230 kernel: pci 0000:00:03.6: BAR 0 [mem 
0x8438f000-0x8438ffff] Jan 20 06:54:16.049608 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 20 06:54:16.049716 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 20 06:54:16.049814 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 20 06:54:16.049916 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.050013 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 20 06:54:16.050113 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 20 06:54:16.050214 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 20 06:54:16.050344 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 20 06:54:16.050451 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.050555 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 20 06:54:16.050653 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 20 06:54:16.050749 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 20 06:54:16.050856 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 20 06:54:16.050962 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.051069 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 20 06:54:16.051170 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 20 06:54:16.053234 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 20 06:54:16.053389 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 20 06:54:16.053499 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.053596 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 20 06:54:16.053692 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 20 06:54:16.053787 kernel: pci 0000:00:04.2: bridge window [mem 
0x81c00000-0x81dfffff] Jan 20 06:54:16.053881 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 20 06:54:16.053982 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.054079 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 20 06:54:16.054175 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 20 06:54:16.058624 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 20 06:54:16.058774 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 20 06:54:16.058879 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.058982 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 20 06:54:16.059088 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 20 06:54:16.059184 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 20 06:54:16.059958 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 20 06:54:16.060087 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.060186 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 20 06:54:16.060320 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 20 06:54:16.060419 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 20 06:54:16.060517 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 20 06:54:16.060619 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.060719 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 20 06:54:16.060814 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 20 06:54:16.060909 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 20 06:54:16.061004 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 20 06:54:16.061106 kernel: pci 
0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.061206 kernel: pci 0000:00:04.7: BAR 0 [mem 0x84386000-0x84386fff] Jan 20 06:54:16.061312 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 20 06:54:16.061408 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 20 06:54:16.061503 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 20 06:54:16.061604 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.061700 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 20 06:54:16.061798 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 20 06:54:16.061894 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 20 06:54:16.061989 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 20 06:54:16.062089 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.062186 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 20 06:54:16.069032 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 20 06:54:16.069179 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 20 06:54:16.069297 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 20 06:54:16.069409 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.069509 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 20 06:54:16.069605 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 20 06:54:16.069700 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 20 06:54:16.069801 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 20 06:54:16.069905 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.070003 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 20 06:54:16.070099 kernel: 
pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 20 06:54:16.070196 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 20 06:54:16.070302 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 20 06:54:16.070409 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 06:54:16.070506 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 20 06:54:16.070602 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 20 06:54:16.070697 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 20 06:54:16.070792 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 20 06:54:16.070895 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 20 06:54:16.070994 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 06:54:16.071141 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 20 06:54:16.071240 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 20 06:54:16.074137 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 20 06:54:16.074287 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 20 06:54:16.074398 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 20 06:54:16.074507 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 20 06:54:16.074608 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 20 06:54:16.074707 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 20 06:54:16.074805 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 20 06:54:16.074903 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 20 06:54:16.075002 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 20 06:54:16.075139 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 20 
06:54:16.075245 kernel: pci_bus 0000:02: extended config space not accessible Jan 20 06:54:16.075257 kernel: acpiphp: Slot [1] registered Jan 20 06:54:16.075275 kernel: acpiphp: Slot [0] registered Jan 20 06:54:16.078044 kernel: acpiphp: Slot [2] registered Jan 20 06:54:16.078067 kernel: acpiphp: Slot [3] registered Jan 20 06:54:16.078081 kernel: acpiphp: Slot [4] registered Jan 20 06:54:16.078090 kernel: acpiphp: Slot [5] registered Jan 20 06:54:16.078098 kernel: acpiphp: Slot [6] registered Jan 20 06:54:16.078107 kernel: acpiphp: Slot [7] registered Jan 20 06:54:16.078115 kernel: acpiphp: Slot [8] registered Jan 20 06:54:16.078124 kernel: acpiphp: Slot [9] registered Jan 20 06:54:16.078133 kernel: acpiphp: Slot [10] registered Jan 20 06:54:16.078145 kernel: acpiphp: Slot [11] registered Jan 20 06:54:16.078154 kernel: acpiphp: Slot [12] registered Jan 20 06:54:16.078162 kernel: acpiphp: Slot [13] registered Jan 20 06:54:16.078171 kernel: acpiphp: Slot [14] registered Jan 20 06:54:16.078180 kernel: acpiphp: Slot [15] registered Jan 20 06:54:16.078189 kernel: acpiphp: Slot [16] registered Jan 20 06:54:16.078197 kernel: acpiphp: Slot [17] registered Jan 20 06:54:16.078208 kernel: acpiphp: Slot [18] registered Jan 20 06:54:16.078216 kernel: acpiphp: Slot [19] registered Jan 20 06:54:16.078225 kernel: acpiphp: Slot [20] registered Jan 20 06:54:16.078233 kernel: acpiphp: Slot [21] registered Jan 20 06:54:16.078242 kernel: acpiphp: Slot [22] registered Jan 20 06:54:16.078250 kernel: acpiphp: Slot [23] registered Jan 20 06:54:16.078259 kernel: acpiphp: Slot [24] registered Jan 20 06:54:16.078935 kernel: acpiphp: Slot [25] registered Jan 20 06:54:16.078951 kernel: acpiphp: Slot [26] registered Jan 20 06:54:16.078960 kernel: acpiphp: Slot [27] registered Jan 20 06:54:16.078968 kernel: acpiphp: Slot [28] registered Jan 20 06:54:16.078977 kernel: acpiphp: Slot [29] registered Jan 20 06:54:16.078985 kernel: acpiphp: Slot [30] registered Jan 20 06:54:16.078994 kernel: acpiphp: 
Slot [31] registered Jan 20 06:54:16.079163 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 20 06:54:16.079316 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 20 06:54:16.079425 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 20 06:54:16.079438 kernel: acpiphp: Slot [0-2] registered Jan 20 06:54:16.079543 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 20 06:54:16.079645 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 20 06:54:16.079747 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 20 06:54:16.079850 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 20 06:54:16.079950 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 20 06:54:16.079961 kernel: acpiphp: Slot [0-3] registered Jan 20 06:54:16.080065 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 20 06:54:16.080166 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 20 06:54:16.080272 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 20 06:54:16.080372 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 20 06:54:16.080384 kernel: acpiphp: Slot [0-4] registered Jan 20 06:54:16.080487 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 20 06:54:16.080587 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 20 06:54:16.080683 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 20 06:54:16.080695 kernel: acpiphp: Slot [0-5] registered Jan 20 06:54:16.080800 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 20 06:54:16.080899 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 20 06:54:16.080998 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 20 06:54:16.081094 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 20 06:54:16.081105 kernel: acpiphp: Slot [0-6] 
registered Jan 20 06:54:16.081202 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 20 06:54:16.081215 kernel: acpiphp: Slot [0-7] registered Jan 20 06:54:16.083403 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 20 06:54:16.083425 kernel: acpiphp: Slot [0-8] registered Jan 20 06:54:16.083533 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 20 06:54:16.083545 kernel: acpiphp: Slot [0-9] registered Jan 20 06:54:16.083643 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 20 06:54:16.083659 kernel: acpiphp: Slot [0-10] registered Jan 20 06:54:16.083756 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 20 06:54:16.083768 kernel: acpiphp: Slot [0-11] registered Jan 20 06:54:16.083864 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 20 06:54:16.083875 kernel: acpiphp: Slot [0-12] registered Jan 20 06:54:16.083972 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 20 06:54:16.083985 kernel: acpiphp: Slot [0-13] registered Jan 20 06:54:16.084081 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 20 06:54:16.084093 kernel: acpiphp: Slot [0-14] registered Jan 20 06:54:16.084190 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 20 06:54:16.084201 kernel: acpiphp: Slot [0-15] registered Jan 20 06:54:16.086076 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 20 06:54:16.086097 kernel: acpiphp: Slot [0-16] registered Jan 20 06:54:16.086210 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 20 06:54:16.086222 kernel: acpiphp: Slot [0-17] registered Jan 20 06:54:16.086338 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 20 06:54:16.086350 kernel: acpiphp: Slot [0-18] registered Jan 20 06:54:16.086448 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 20 06:54:16.086459 kernel: acpiphp: Slot [0-19] registered Jan 20 06:54:16.086558 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 20 06:54:16.086569 kernel: acpiphp: Slot [0-20] registered Jan 20 06:54:16.086665 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 20 06:54:16.086677 
kernel: acpiphp: Slot [0-21] registered Jan 20 06:54:16.086772 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 20 06:54:16.086783 kernel: acpiphp: Slot [0-22] registered Jan 20 06:54:16.086878 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 20 06:54:16.086891 kernel: acpiphp: Slot [0-23] registered Jan 20 06:54:16.086989 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 20 06:54:16.087000 kernel: acpiphp: Slot [0-24] registered Jan 20 06:54:16.087130 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 20 06:54:16.087141 kernel: acpiphp: Slot [0-25] registered Jan 20 06:54:16.087237 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 20 06:54:16.087251 kernel: acpiphp: Slot [0-26] registered Jan 20 06:54:16.087410 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 20 06:54:16.087423 kernel: acpiphp: Slot [0-27] registered Jan 20 06:54:16.087520 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 20 06:54:16.087531 kernel: acpiphp: Slot [0-28] registered Jan 20 06:54:16.087628 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 20 06:54:16.087642 kernel: acpiphp: Slot [0-29] registered Jan 20 06:54:16.087738 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 20 06:54:16.087749 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 06:54:16.087759 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 06:54:16.087768 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 06:54:16.087777 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 06:54:16.087785 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 06:54:16.087796 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 06:54:16.087804 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 06:54:16.087813 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 06:54:16.087822 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 06:54:16.087830 kernel: ACPI: PCI: Interrupt 
link GSIB configured for IRQ 17 Jan 20 06:54:16.087839 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 06:54:16.087848 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 06:54:16.087858 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 06:54:16.087866 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 06:54:16.087875 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 06:54:16.087884 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 06:54:16.087893 kernel: iommu: Default domain type: Translated Jan 20 06:54:16.087901 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 06:54:16.087910 kernel: efivars: Registered efivars operations Jan 20 06:54:16.087918 kernel: PCI: Using ACPI for IRQ routing Jan 20 06:54:16.087929 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 06:54:16.087938 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 20 06:54:16.087947 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 20 06:54:16.087955 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 20 06:54:16.087963 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 20 06:54:16.087972 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 20 06:54:16.087981 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 20 06:54:16.087991 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 20 06:54:16.088000 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 20 06:54:16.088008 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 20 06:54:16.088106 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 06:54:16.088202 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 06:54:16.088321 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 06:54:16.088335 kernel: vgaarb: loaded Jan 20 06:54:16.088344 
kernel: clocksource: Switched to clocksource kvm-clock Jan 20 06:54:16.088352 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 06:54:16.088361 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 06:54:16.088370 kernel: pnp: PnP ACPI init Jan 20 06:54:16.088483 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Jan 20 06:54:16.088496 kernel: pnp: PnP ACPI: found 5 devices Jan 20 06:54:16.088507 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 06:54:16.088516 kernel: NET: Registered PF_INET protocol family Jan 20 06:54:16.088524 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 06:54:16.088533 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 06:54:16.088542 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 06:54:16.088550 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 06:54:16.088559 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 06:54:16.088569 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 06:54:16.088578 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 06:54:16.088586 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 06:54:16.088594 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 06:54:16.088603 kernel: NET: Registered PF_XDP protocol family Jan 20 06:54:16.088708 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 20 06:54:16.088809 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 20 06:54:16.088912 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 20 06:54:16.089012 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] 
add_size 1000 Jan 20 06:54:16.089111 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 20 06:54:16.089208 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 20 06:54:16.089433 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 20 06:54:16.089567 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 20 06:54:16.089678 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 20 06:54:16.089781 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 20 06:54:16.089882 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 20 06:54:16.089983 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 20 06:54:16.090083 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 20 06:54:16.090181 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 20 06:54:16.090296 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 20 06:54:16.090397 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 20 06:54:16.090495 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 20 06:54:16.090594 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 20 06:54:16.090693 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 20 06:54:16.090792 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 20 06:54:16.090892 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 20 06:54:16.090993 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 20 06:54:16.091105 kernel: pci 
0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 20 06:54:16.091206 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 20 06:54:16.091336 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 20 06:54:16.091438 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 20 06:54:16.091927 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 20 06:54:16.092042 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 20 06:54:16.095593 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 20 06:54:16.095709 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 20 06:54:16.095809 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 20 06:54:16.095907 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 20 06:54:16.096004 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 20 06:54:16.096109 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 20 06:54:16.096208 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 20 06:54:16.096324 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 20 06:54:16.096426 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 20 06:54:16.096524 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 20 06:54:16.096623 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 20 06:54:16.096721 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 20 06:54:16.096821 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 20 06:54:16.096919 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 20 06:54:16.097017 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: 
can't assign; no space Jan 20 06:54:16.097111 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.097208 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.097337 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.097439 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.097534 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.097629 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.097724 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.097819 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.097916 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.098014 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.098112 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.098209 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.098316 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.098414 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.098510 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.098607 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.098706 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.098803 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.098897 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.098995 kernel: pci 0000:00:05.0: bridge 
window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.099099 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.099197 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.099327 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.099428 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.099524 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.099621 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.099717 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.099815 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.099911 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.100010 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 20 06:54:16.100106 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 20 06:54:16.100201 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 20 06:54:16.100307 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 20 06:54:16.100405 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 20 06:54:16.100499 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 20 06:54:16.100598 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 20 06:54:16.100694 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 20 06:54:16.100790 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 20 06:54:16.100885 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 20 06:54:16.100981 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 20 06:54:16.101075 kernel: 
pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 20 06:54:16.101169 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 20 06:54:16.101284 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.101384 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.101481 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.101577 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.105073 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.105193 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.105303 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.105404 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.105502 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.105597 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.105695 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.105789 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.105886 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.105983 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.106079 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.106174 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.106279 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.106375 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.106471 
kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.106568 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.106665 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.106760 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.106856 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.106951 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.107067 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.107163 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.107261 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.107371 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.107467 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 06:54:16.107563 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 20 06:54:16.107665 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 20 06:54:16.107763 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 20 06:54:16.107860 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 20 06:54:16.107960 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 20 06:54:16.108056 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 20 06:54:16.108151 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 20 06:54:16.108245 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 20 06:54:16.108365 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 20 06:54:16.108467 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned 
Jan 20 06:54:16.108564 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 20 06:54:16.108661 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 20 06:54:16.109371 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 20 06:54:16.109498 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 20 06:54:16.109594 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 20 06:54:16.109690 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 20 06:54:16.109786 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 20 06:54:16.109882 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 20 06:54:16.109980 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 20 06:54:16.110076 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 20 06:54:16.110171 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 20 06:54:16.110276 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 20 06:54:16.110374 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 20 06:54:16.110469 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 20 06:54:16.110563 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 20 06:54:16.110662 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 20 06:54:16.110756 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 20 06:54:16.110851 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 20 06:54:16.110947 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 20 06:54:16.111051 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 20 06:54:16.111146 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 20 06:54:16.111245 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 20 
06:54:16.111352 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 20 06:54:16.111447 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 20 06:54:16.111546 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 20 06:54:16.111640 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 20 06:54:16.111734 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 20 06:54:16.111830 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 20 06:54:16.111928 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 20 06:54:16.112023 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 20 06:54:16.112120 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 20 06:54:16.112214 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 20 06:54:16.112319 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 20 06:54:16.112416 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 20 06:54:16.112510 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 20 06:54:16.112606 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 20 06:54:16.112705 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 20 06:54:16.112799 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 20 06:54:16.112894 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 20 06:54:16.112990 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 20 06:54:16.113083 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 20 06:54:16.113179 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 20 06:54:16.113309 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 20 06:54:16.113410 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 
20 06:54:16.113504 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 20 06:54:16.113601 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 20 06:54:16.113699 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 20 06:54:16.113795 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 20 06:54:16.113889 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 20 06:54:16.113989 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 20 06:54:16.114083 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 20 06:54:16.114179 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 20 06:54:16.114282 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 20 06:54:16.114381 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 20 06:54:16.114475 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 20 06:54:16.114573 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 20 06:54:16.114668 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 20 06:54:16.114764 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 20 06:54:16.114859 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 20 06:54:16.114955 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 20 06:54:16.115088 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 20 06:54:16.115187 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 20 06:54:16.115289 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 20 06:54:16.115385 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 20 06:54:16.115481 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 20 06:54:16.115579 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 20 06:54:16.115674 kernel: pci 
0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 20 06:54:16.115772 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 20 06:54:16.115866 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 20 06:54:16.115963 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 20 06:54:16.116058 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 20 06:54:16.116155 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 20 06:54:16.116252 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 20 06:54:16.116365 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 20 06:54:16.116463 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 20 06:54:16.116565 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 20 06:54:16.116662 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 20 06:54:16.116760 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 20 06:54:16.116855 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 20 06:54:16.116951 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 20 06:54:16.117050 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 20 06:54:16.117147 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 20 06:54:16.117242 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 20 06:54:16.117346 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 20 06:54:16.117442 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 20 06:54:16.117543 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 20 06:54:16.117641 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 20 06:54:16.117737 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 20 06:54:16.117833 kernel: pci 0000:00:05.2: bridge window [mem 
0x38d000000000-0x38d7ffffffff 64bit pref] Jan 20 06:54:16.117931 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 20 06:54:16.118028 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 20 06:54:16.118123 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 20 06:54:16.118221 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 20 06:54:16.118330 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 20 06:54:16.118428 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 20 06:54:16.118524 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 20 06:54:16.118620 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 20 06:54:16.118718 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 06:54:16.118813 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 06:54:16.118902 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 06:54:16.118990 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 20 06:54:16.119088 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 20 06:54:16.119177 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 20 06:54:16.119291 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 20 06:54:16.119391 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 20 06:54:16.119481 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 20 06:54:16.119577 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 20 06:54:16.119673 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 20 06:54:16.120028 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 20 06:54:16.120134 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 20 06:54:16.120230 kernel: pci_bus 0000:03: resource 2 [mem 
0x380800000000-0x380fffffffff 64bit pref] Jan 20 06:54:16.120342 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 20 06:54:16.120437 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 20 06:54:16.120535 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 20 06:54:16.120625 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 20 06:54:16.120724 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 20 06:54:16.120814 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 20 06:54:16.120911 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 20 06:54:16.121002 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 20 06:54:16.121099 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 20 06:54:16.121192 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 20 06:54:16.121300 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 20 06:54:16.121391 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 20 06:54:16.121488 kernel: pci_bus 0000:0a: resource 1 [mem 0x83000000-0x831fffff] Jan 20 06:54:16.121895 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 20 06:54:16.122003 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 20 06:54:16.122095 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 20 06:54:16.122192 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 20 06:54:16.122294 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 20 06:54:16.122392 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 20 06:54:16.122486 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 20 06:54:16.122582 
kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 20 06:54:16.122673 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 20 06:54:16.122768 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 20 06:54:16.122858 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 20 06:54:16.122956 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 20 06:54:16.123056 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 20 06:54:16.123153 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 20 06:54:16.123245 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 20 06:54:16.126656 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 20 06:54:16.126774 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 20 06:54:16.126864 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 20 06:54:16.126960 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 20 06:54:16.127085 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 20 06:54:16.127175 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 20 06:54:16.127284 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 20 06:54:16.127378 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 20 06:54:16.127467 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 20 06:54:16.127565 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 20 06:54:16.127654 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 20 06:54:16.127744 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 20 06:54:16.127842 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 20 06:54:16.127932 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] 
Jan 20 06:54:16.128021 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 20 06:54:16.128120 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 20 06:54:16.128211 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 20 06:54:16.128347 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 20 06:54:16.128446 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 20 06:54:16.128537 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 20 06:54:16.128626 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 20 06:54:16.128725 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 20 06:54:16.128815 kernel: pci_bus 0000:19: resource 1 [mem 0x81200000-0x813fffff] Jan 20 06:54:16.128904 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 20 06:54:16.129001 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 20 06:54:16.129091 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 20 06:54:16.129180 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 20 06:54:16.129282 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 20 06:54:16.129374 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 20 06:54:16.129467 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 20 06:54:16.129562 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 20 06:54:16.129652 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 20 06:54:16.129741 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 20 06:54:16.129838 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 20 06:54:16.129928 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 20 06:54:16.130020 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit 
pref] Jan 20 06:54:16.130118 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 20 06:54:16.130207 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 20 06:54:16.130308 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 20 06:54:16.130321 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 06:54:16.130330 kernel: PCI: CLS 0 bytes, default 64 Jan 20 06:54:16.130341 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 20 06:54:16.130350 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 20 06:54:16.130359 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 20 06:54:16.130368 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 20 06:54:16.130377 kernel: Initialise system trusted keyrings Jan 20 06:54:16.130387 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 06:54:16.130395 kernel: Key type asymmetric registered Jan 20 06:54:16.130406 kernel: Asymmetric key parser 'x509' registered Jan 20 06:54:16.130414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 06:54:16.130422 kernel: io scheduler mq-deadline registered Jan 20 06:54:16.130431 kernel: io scheduler kyber registered Jan 20 06:54:16.130439 kernel: io scheduler bfq registered Jan 20 06:54:16.130545 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 20 06:54:16.130646 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 20 06:54:16.130749 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 20 06:54:16.130848 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 20 06:54:16.130945 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 20 06:54:16.131051 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 20 06:54:16.131150 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 20 06:54:16.131251 
kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 20 06:54:16.131359 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 20 06:54:16.131455 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 20 06:54:16.131555 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 20 06:54:16.131651 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 20 06:54:16.131750 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 20 06:54:16.131847 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 20 06:54:16.131944 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 20 06:54:16.132040 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 20 06:54:16.132053 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 06:54:16.132151 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 20 06:54:16.132248 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 20 06:54:16.132356 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 20 06:54:16.132454 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 20 06:54:16.132552 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 20 06:54:16.133278 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 20 06:54:16.133396 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 20 06:54:16.133495 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 20 06:54:16.133593 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 20 06:54:16.133694 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 20 06:54:16.133791 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 20 06:54:16.133887 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 20 06:54:16.133984 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 20 06:54:16.134082 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 20 06:54:16.134179 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 20 06:54:16.134726 kernel: 
pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 20 06:54:16.134743 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 06:54:16.134850 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 20 06:54:16.134948 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 20 06:54:16.135057 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 20 06:54:16.135153 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 20 06:54:16.135251 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 20 06:54:16.135375 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 20 06:54:16.135474 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 20 06:54:16.135571 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 20 06:54:16.135668 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 20 06:54:16.135763 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 20 06:54:16.135861 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 20 06:54:16.135961 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 20 06:54:16.136060 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 20 06:54:16.136160 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 20 06:54:16.136260 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 20 06:54:16.136369 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 20 06:54:16.136380 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 20 06:54:16.136506 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 20 06:54:16.136606 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 20 06:54:16.136704 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 20 06:54:16.136803 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 20 06:54:16.136903 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 20 06:54:16.137002 kernel: pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 20 06:54:16.137100 kernel: pcieport 0000:00:05.3: 
PME: Signaling with IRQ 51 Jan 20 06:54:16.137200 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 20 06:54:16.137308 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 20 06:54:16.137405 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 20 06:54:16.137416 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 06:54:16.137424 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 06:54:16.137434 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 06:54:16.137443 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 06:54:16.137454 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 06:54:16.137463 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 06:54:16.137569 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 20 06:54:16.137664 kernel: rtc_cmos 00:03: registered as rtc0 Jan 20 06:54:16.137675 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 20 06:54:16.137766 kernel: rtc_cmos 00:03: setting system clock to 2026-01-20T06:54:14 UTC (1768892054) Jan 20 06:54:16.137864 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 20 06:54:16.137875 kernel: intel_pstate: CPU model not supported Jan 20 06:54:16.137884 kernel: efifb: probing for efifb Jan 20 06:54:16.137892 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 20 06:54:16.137901 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 20 06:54:16.137910 kernel: efifb: scrolling: redraw Jan 20 06:54:16.137918 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 20 06:54:16.137927 kernel: Console: switching to colour frame buffer device 160x50 Jan 20 06:54:16.137937 kernel: fb0: EFI VGA frame buffer device Jan 20 06:54:16.137946 kernel: pstore: Using crash dump compression: deflate Jan 20 06:54:16.137955 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 
06:54:16.137964 kernel: NET: Registered PF_INET6 protocol family Jan 20 06:54:16.137972 kernel: Segment Routing with IPv6 Jan 20 06:54:16.137981 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 06:54:16.137989 kernel: NET: Registered PF_PACKET protocol family Jan 20 06:54:16.138000 kernel: Key type dns_resolver registered Jan 20 06:54:16.138009 kernel: IPI shorthand broadcast: enabled Jan 20 06:54:16.138018 kernel: sched_clock: Marking stable (2548001635, 157982835)->(2808968242, -102983772) Jan 20 06:54:16.138027 kernel: registered taskstats version 1 Jan 20 06:54:16.138035 kernel: Loading compiled-in X.509 certificates Jan 20 06:54:16.138044 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3e9049adf8f1d71dd06c731465288f6e1d353052' Jan 20 06:54:16.138053 kernel: Demotion targets for Node 0: null Jan 20 06:54:16.138063 kernel: Key type .fscrypt registered Jan 20 06:54:16.138072 kernel: Key type fscrypt-provisioning registered Jan 20 06:54:16.138080 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 06:54:16.138089 kernel: ima: Allocated hash algorithm: sha1 Jan 20 06:54:16.138097 kernel: ima: No architecture policies found Jan 20 06:54:16.138106 kernel: clk: Disabling unused clocks Jan 20 06:54:16.138114 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 20 06:54:16.138125 kernel: Write protecting the kernel read-only data: 47104k Jan 20 06:54:16.138134 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 06:54:16.138143 kernel: Run /init as init process Jan 20 06:54:16.138156 kernel: with arguments: Jan 20 06:54:16.138165 kernel: /init Jan 20 06:54:16.138174 kernel: with environment: Jan 20 06:54:16.138182 kernel: HOME=/ Jan 20 06:54:16.138191 kernel: TERM=linux Jan 20 06:54:16.138201 kernel: SCSI subsystem initialized Jan 20 06:54:16.138210 kernel: libata version 3.00 loaded. 
Jan 20 06:54:16.138336 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 06:54:16.138348 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 06:54:16.138446 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 06:54:16.138547 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 06:54:16.138648 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 06:54:16.138765 kernel: scsi host0: ahci Jan 20 06:54:16.138872 kernel: scsi host1: ahci Jan 20 06:54:16.138995 kernel: scsi host2: ahci Jan 20 06:54:16.139108 kernel: scsi host3: ahci Jan 20 06:54:16.139209 kernel: scsi host4: ahci Jan 20 06:54:16.142841 kernel: scsi host5: ahci Jan 20 06:54:16.142867 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 55 lpm-pol 1 Jan 20 06:54:16.142877 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 55 lpm-pol 1 Jan 20 06:54:16.142886 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 55 lpm-pol 1 Jan 20 06:54:16.142895 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 55 lpm-pol 1 Jan 20 06:54:16.142904 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 55 lpm-pol 1 Jan 20 06:54:16.142919 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 55 lpm-pol 1 Jan 20 06:54:16.142929 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 06:54:16.142938 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 06:54:16.142946 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 06:54:16.142955 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 06:54:16.142964 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 20 06:54:16.142973 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 06:54:16.142984 kernel: ACPI: bus type USB registered Jan 20 06:54:16.142993 kernel: usbcore: registered new interface driver usbfs Jan 20 06:54:16.143002 kernel: usbcore: registered 
new interface driver hub Jan 20 06:54:16.143048 kernel: usbcore: registered new device driver usb Jan 20 06:54:16.143163 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 20 06:54:16.143280 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 20 06:54:16.143384 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 20 06:54:16.143488 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 20 06:54:16.143614 kernel: hub 1-0:1.0: USB hub found Jan 20 06:54:16.143727 kernel: hub 1-0:1.0: 2 ports detected Jan 20 06:54:16.143836 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 20 06:54:16.143935 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 20 06:54:16.143949 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 06:54:16.143958 kernel: GPT:25804799 != 104857599 Jan 20 06:54:16.143968 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 06:54:16.143976 kernel: GPT:25804799 != 104857599 Jan 20 06:54:16.143985 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 06:54:16.143993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 06:54:16.144003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 20 06:54:16.144014 kernel: device-mapper: uevent: version 1.0.3 Jan 20 06:54:16.144022 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 06:54:16.144032 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 06:54:16.144041 kernel: raid6: avx512x4 gen() 42615 MB/s Jan 20 06:54:16.144049 kernel: raid6: avx512x2 gen() 45024 MB/s Jan 20 06:54:16.144058 kernel: raid6: avx512x1 gen() 44280 MB/s Jan 20 06:54:16.144067 kernel: raid6: avx2x4 gen() 34259 MB/s Jan 20 06:54:16.144077 kernel: raid6: avx2x2 gen() 33534 MB/s Jan 20 06:54:16.144086 kernel: raid6: avx2x1 gen() 30194 MB/s Jan 20 06:54:16.144095 kernel: raid6: using algorithm avx512x2 gen() 45024 MB/s Jan 20 06:54:16.144104 kernel: raid6: .... xor() 26778 MB/s, rmw enabled Jan 20 06:54:16.144114 kernel: raid6: using avx512x2 recovery algorithm Jan 20 06:54:16.144123 kernel: xor: automatically using best checksumming function avx Jan 20 06:54:16.144132 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 06:54:16.144143 kernel: BTRFS: device fsid 98f50efd-4872-4dd8-af35-5e494490b9aa devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (203) Jan 20 06:54:16.144152 kernel: BTRFS info (device dm-0): first mount of filesystem 98f50efd-4872-4dd8-af35-5e494490b9aa Jan 20 06:54:16.144161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:54:16.144299 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 20 06:54:16.144313 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 06:54:16.144322 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 06:54:16.144334 kernel: loop: module loaded Jan 20 06:54:16.144343 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 06:54:16.144352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 06:54:16.144362 systemd[1]: Successfully made /usr/ read-only. 
Jan 20 06:54:16.144375 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:54:16.144385 systemd[1]: Detected virtualization kvm. Jan 20 06:54:16.144395 systemd[1]: Detected architecture x86-64. Jan 20 06:54:16.144405 systemd[1]: Running in initrd. Jan 20 06:54:16.144413 systemd[1]: No hostname configured, using default hostname. Jan 20 06:54:16.144423 systemd[1]: Hostname set to . Jan 20 06:54:16.144432 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:54:16.144441 systemd[1]: Queued start job for default target initrd.target. Jan 20 06:54:16.144452 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:54:16.144462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:54:16.144472 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:54:16.144482 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 06:54:16.144491 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:54:16.144501 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 06:54:16.144512 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 06:54:16.144521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:54:16.144531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 20 06:54:16.144540 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:54:16.144549 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:54:16.144558 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:54:16.144569 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:54:16.144578 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:54:16.144587 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:54:16.144596 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:54:16.144605 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:54:16.144615 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 06:54:16.144624 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 06:54:16.144635 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:54:16.144644 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:54:16.144653 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:54:16.144662 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:54:16.144672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 06:54:16.144681 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 06:54:16.144690 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:54:16.144701 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 06:54:16.144711 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). 
Jan 20 06:54:16.144720 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 06:54:16.144729 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:54:16.144738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:54:16.144750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:54:16.144759 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 06:54:16.144768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:54:16.144777 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 06:54:16.144787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:54:16.144822 systemd-journald[340]: Collecting audit messages is enabled. Jan 20 06:54:16.144846 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:54:16.144855 kernel: audit: type=1130 audit(1768892056.054:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.144867 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 06:54:16.144876 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:54:16.144886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:54:16.144895 kernel: audit: type=1130 audit(1768892056.077:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:16.144904 kernel: Bridge firewalling registered Jan 20 06:54:16.144913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 06:54:16.144923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:54:16.144934 kernel: audit: type=1130 audit(1768892056.091:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.144943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:54:16.144952 kernel: audit: type=1130 audit(1768892056.101:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.144962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:54:16.144971 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:54:16.144980 kernel: audit: type=1130 audit(1768892056.123:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.144991 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 06:54:16.145000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:54:16.145010 kernel: audit: type=1130 audit(1768892056.138:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.145020 systemd-journald[340]: Journal started Jan 20 06:54:16.145040 systemd-journald[340]: Runtime Journal (/run/log/journal/cff2c703321b425aafeba8d1e85865ed) is 8M, max 77.9M, 69.9M free. 
Jan 20 06:54:16.148306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:54:16.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.082810 systemd-modules-load[342]: Inserted module 'br_netfilter' Jan 20 06:54:16.151310 kernel: audit: type=1334 audit(1768892056.145:8): prog-id=6 op=LOAD Jan 20 06:54:16.145000 audit: BPF prog-id=6 op=LOAD Jan 20 06:54:16.154295 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:54:16.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:16.160630 kernel: audit: type=1130 audit(1768892056.155:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.163193 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:54:16.168903 dracut-cmdline[366]: dracut-109 Jan 20 06:54:16.171317 dracut-cmdline[366]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:54:16.187238 systemd-tmpfiles[386]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 06:54:16.199758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:54:16.207972 kernel: audit: type=1130 audit(1768892056.200:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.220248 systemd-resolved[368]: Positive Trust Anchors: Jan 20 06:54:16.221097 systemd-resolved[368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:54:16.221104 systemd-resolved[368]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:54:16.221137 systemd-resolved[368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:54:16.264370 systemd-resolved[368]: Defaulting to hostname 'linux'. Jan 20 06:54:16.267635 kernel: Loading iSCSI transport class v2.0-870. Jan 20 06:54:16.266288 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:54:16.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.268326 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:54:16.285452 kernel: iscsi: registered transport (tcp) Jan 20 06:54:16.311522 kernel: iscsi: registered transport (qla4xxx) Jan 20 06:54:16.311593 kernel: QLogic iSCSI HBA Driver Jan 20 06:54:16.337894 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:54:16.355045 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:54:16.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.357493 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 20 06:54:16.397508 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 06:54:16.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.399540 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 06:54:16.400650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 06:54:16.434420 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:54:16.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.435000 audit: BPF prog-id=7 op=LOAD Jan 20 06:54:16.435000 audit: BPF prog-id=8 op=LOAD Jan 20 06:54:16.437579 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:54:16.463889 systemd-udevd[610]: Using default interface naming scheme 'v257'. Jan 20 06:54:16.472910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:54:16.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.474490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 06:54:16.501496 dracut-pre-trigger[673]: rd.md=0: removing MD RAID activation Jan 20 06:54:16.505979 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:54:16.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:16.507000 audit: BPF prog-id=9 op=LOAD Jan 20 06:54:16.509622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:54:16.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.529370 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:54:16.532404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:54:16.555657 systemd-networkd[726]: lo: Link UP Jan 20 06:54:16.555664 systemd-networkd[726]: lo: Gained carrier Jan 20 06:54:16.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.556759 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:54:16.557419 systemd[1]: Reached target network.target - Network. Jan 20 06:54:16.619575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:54:16.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.621531 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 06:54:16.725027 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 06:54:16.755941 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 06:54:16.766697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 20 06:54:16.771844 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 06:54:16.775289 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 06:54:16.785162 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 06:54:16.789803 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 06:54:16.790043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 06:54:16.801293 kernel: usbcore: registered new interface driver usbhid Jan 20 06:54:16.801354 kernel: usbhid: USB HID core driver Jan 20 06:54:16.804893 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:54:16.806506 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:54:16.809506 systemd-networkd[726]: eth0: Link UP Jan 20 06:54:16.816582 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 Jan 20 06:54:16.816612 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 20 06:54:16.809653 systemd-networkd[726]: eth0: Gained carrier Jan 20 06:54:16.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.809666 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:54:16.811966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:54:16.812068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 06:54:16.817428 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:54:16.822295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:54:16.825315 disk-uuid[791]: Primary Header is updated. Jan 20 06:54:16.825315 disk-uuid[791]: Secondary Entries is updated. Jan 20 06:54:16.825315 disk-uuid[791]: Secondary Header is updated. Jan 20 06:54:16.832860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:54:16.832944 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:54:16.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.837463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:54:16.863317 kernel: AES CTR mode by8 optimization enabled Jan 20 06:54:16.866750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:54:16.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:16.914336 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.92/25, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 06:54:16.966558 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 06:54:16.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:16.967928 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:54:16.968461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:54:16.969322 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:54:16.971237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 06:54:16.995865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:54:16.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:17.903710 disk-uuid[793]: Warning: The kernel is still using the old partition table. Jan 20 06:54:17.903710 disk-uuid[793]: The new table will be used at the next reboot or after you Jan 20 06:54:17.903710 disk-uuid[793]: run partprobe(8) or kpartx(8) Jan 20 06:54:17.903710 disk-uuid[793]: The operation has completed successfully. Jan 20 06:54:17.912933 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 06:54:17.921617 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 20 06:54:17.921644 kernel: audit: type=1130 audit(1768892057.913:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:17.921665 kernel: audit: type=1131 audit(1768892057.913:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:17.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:17.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:17.913052 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 06:54:17.916253 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 06:54:17.973300 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916) Jan 20 06:54:17.976688 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:54:17.976744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:54:17.985770 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:54:17.985817 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:54:17.995289 kernel: BTRFS info (device vda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:54:17.995930 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 06:54:18.005853 kernel: audit: type=1130 audit(1768892057.996:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:17.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:17.999942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 06:54:18.230347 ignition[935]: Ignition 2.24.0 Jan 20 06:54:18.230359 ignition[935]: Stage: fetch-offline Jan 20 06:54:18.230406 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:54:18.232300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 20 06:54:18.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:18.230417 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 06:54:18.237756 kernel: audit: type=1130 audit(1768892058.232:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:18.235436 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 06:54:18.230514 ignition[935]: parsed url from cmdline: "" Jan 20 06:54:18.230524 ignition[935]: no config URL provided Jan 20 06:54:18.230530 ignition[935]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:54:18.230538 ignition[935]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:54:18.230543 ignition[935]: failed to fetch config: resource requires networking Jan 20 06:54:18.230711 ignition[935]: Ignition finished successfully Jan 20 06:54:18.258103 ignition[942]: Ignition 2.24.0 Jan 20 06:54:18.258119 ignition[942]: Stage: fetch Jan 20 06:54:18.258304 ignition[942]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:54:18.258312 ignition[942]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 06:54:18.258397 ignition[942]: parsed url from cmdline: "" Jan 20 06:54:18.258401 ignition[942]: no config URL provided Jan 20 06:54:18.258405 ignition[942]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:54:18.258411 ignition[942]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:54:18.258524 ignition[942]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 20 06:54:18.258545 ignition[942]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jan 20 06:54:18.258569 ignition[942]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 20 06:54:18.843424 systemd-networkd[726]: eth0: Gained IPv6LL Jan 20 06:54:18.986650 ignition[942]: GET result: OK Jan 20 06:54:18.986844 ignition[942]: parsing config with SHA512: da9bb4b76ab6d7d7d4920cbbe143ecd4fd1be4891fc916f3cac6d7207533ab2737ae5dd7aef1bd08e93679f9aeac0800a2a90236954cfa9086033118416a9bb9 Jan 20 06:54:18.994854 unknown[942]: fetched base config from "system" Jan 20 06:54:18.994863 unknown[942]: fetched base config from "system" Jan 20 06:54:18.995191 ignition[942]: fetch: fetch complete Jan 20 06:54:18.994868 unknown[942]: fetched user config from "openstack" Jan 20 06:54:18.995195 ignition[942]: fetch: fetch passed Jan 20 06:54:19.001800 kernel: audit: type=1130 audit(1768892058.997:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:18.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:18.996936 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 06:54:18.995238 ignition[942]: Ignition finished successfully Jan 20 06:54:18.999464 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 06:54:19.021461 ignition[949]: Ignition 2.24.0 Jan 20 06:54:19.022125 ignition[949]: Stage: kargs Jan 20 06:54:19.022319 ignition[949]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:54:19.022328 ignition[949]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 06:54:19.023118 ignition[949]: kargs: kargs passed Jan 20 06:54:19.023158 ignition[949]: Ignition finished successfully Jan 20 06:54:19.026655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 20 06:54:19.028433 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 06:54:19.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.034333 kernel: audit: type=1130 audit(1768892059.027:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.064169 ignition[955]: Ignition 2.24.0 Jan 20 06:54:19.064180 ignition[955]: Stage: disks Jan 20 06:54:19.064975 ignition[955]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:54:19.064984 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 06:54:19.065616 ignition[955]: disks: disks passed Jan 20 06:54:19.067150 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 06:54:19.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.065654 ignition[955]: Ignition finished successfully Jan 20 06:54:19.071714 kernel: audit: type=1130 audit(1768892059.067:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.068137 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 06:54:19.071363 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 06:54:19.072031 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:54:19.072751 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:54:19.073437 systemd[1]: Reached target basic.target - Basic System. 
Jan 20 06:54:19.075035 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 06:54:19.137865 systemd-fsck[963]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 20 06:54:19.140190 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 06:54:19.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.143355 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 06:54:19.145289 kernel: audit: type=1130 audit(1768892059.140:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.314313 kernel: EXT4-fs (vda9): mounted filesystem cccfbfd8-bb77-4a2f-9af9-c87f4957b904 r/w with ordered data mode. Quota mode: none. Jan 20 06:54:19.315461 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 06:54:19.317008 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 06:54:19.321604 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:54:19.326224 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 06:54:19.327124 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 06:54:19.332878 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 20 06:54:19.334170 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 06:54:19.334212 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 20 06:54:19.337398 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 06:54:19.340286 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 06:54:19.360315 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (971) Jan 20 06:54:19.370458 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:54:19.370522 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:54:19.381532 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:54:19.381598 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:54:19.384455 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:54:19.442292 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:19.580965 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 06:54:19.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.584350 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 06:54:19.586285 kernel: audit: type=1130 audit(1768892059.581:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.587213 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 06:54:19.604576 kernel: BTRFS info (device vda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:54:19.603231 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 20 06:54:19.626727 ignition[1071]: INFO : Ignition 2.24.0 Jan 20 06:54:19.626727 ignition[1071]: INFO : Stage: mount Jan 20 06:54:19.627988 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:54:19.627988 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 06:54:19.627988 ignition[1071]: INFO : mount: mount passed Jan 20 06:54:19.627988 ignition[1071]: INFO : Ignition finished successfully Jan 20 06:54:19.630493 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 06:54:19.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.635072 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 06:54:19.635532 kernel: audit: type=1130 audit(1768892059.631:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:19.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:20.497305 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:22.508286 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:26.516371 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:26.520386 coreos-metadata[973]: Jan 20 06:54:26.520 WARN failed to locate config-drive, using the metadata service API instead Jan 20 06:54:26.534633 coreos-metadata[973]: Jan 20 06:54:26.534 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 06:54:26.694285 coreos-metadata[973]: Jan 20 06:54:26.694 INFO Fetch successful Jan 20 06:54:26.695052 coreos-metadata[973]: Jan 20 06:54:26.695 INFO wrote hostname ci-4585-0-0-n-f719bce5cf to /sysroot/etc/hostname Jan 20 06:54:26.695902 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 20 06:54:26.706063 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:54:26.706090 kernel: audit: type=1130 audit(1768892066.695:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:26.706106 kernel: audit: type=1131 audit(1768892066.695:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:26.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:26.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:26.696020 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 20 06:54:26.697930 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 06:54:26.723099 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:54:26.756331 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1088) Jan 20 06:54:26.762125 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:54:26.762185 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:54:26.770643 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:54:26.770711 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:54:26.772857 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:54:26.799941 ignition[1106]: INFO : Ignition 2.24.0 Jan 20 06:54:26.800754 ignition[1106]: INFO : Stage: files Jan 20 06:54:26.800754 ignition[1106]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:54:26.800754 ignition[1106]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 06:54:26.802420 ignition[1106]: DEBUG : files: compiled without relabeling support, skipping Jan 20 06:54:26.803371 ignition[1106]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 06:54:26.803371 ignition[1106]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 06:54:26.812237 ignition[1106]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 06:54:26.813064 ignition[1106]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 06:54:26.813533 ignition[1106]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 06:54:26.813115 unknown[1106]: wrote ssh authorized keys file for user: core Jan 20 06:54:26.819192 
ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:54:26.820106 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 06:54:26.880359 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 06:54:26.996009 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:54:26.997261 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:54:27.000641 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:54:27.000641 
ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:54:27.000641 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:54:27.002210 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:54:27.002210 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:54:27.002210 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 06:54:27.451477 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 06:54:28.042045 ignition[1106]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:54:28.043418 ignition[1106]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 06:54:28.047499 ignition[1106]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:54:28.051287 ignition[1106]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:54:28.051287 ignition[1106]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 06:54:28.051287 ignition[1106]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 06:54:28.051287 ignition[1106]: INFO : files: op(d): [finished] 
setting preset to enabled for "prepare-helm.service" Jan 20 06:54:28.051287 ignition[1106]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:54:28.060366 kernel: audit: type=1130 audit(1768892068.054:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.060432 ignition[1106]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:54:28.060432 ignition[1106]: INFO : files: files passed Jan 20 06:54:28.060432 ignition[1106]: INFO : Ignition finished successfully Jan 20 06:54:28.053403 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 06:54:28.057450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 06:54:28.060932 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 06:54:28.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.071324 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 06:54:28.079438 kernel: audit: type=1130 audit(1768892068.071:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:28.079467 kernel: audit: type=1131 audit(1768892068.071:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.071424 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 06:54:28.084992 initrd-setup-root-after-ignition[1137]: grep: Jan 20 06:54:28.085964 initrd-setup-root-after-ignition[1141]: grep: Jan 20 06:54:28.086750 initrd-setup-root-after-ignition[1137]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:54:28.086750 initrd-setup-root-after-ignition[1137]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:54:28.087743 initrd-setup-root-after-ignition[1141]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:54:28.089036 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:54:28.090149 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 06:54:28.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.095461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 06:54:28.096568 kernel: audit: type=1130 audit(1768892068.089:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:28.136440 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 06:54:28.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.137311 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 06:54:28.145607 kernel: audit: type=1130 audit(1768892068.137:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.145637 kernel: audit: type=1131 audit(1768892068.137:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.138160 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 06:54:28.146784 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 06:54:28.147984 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 06:54:28.149525 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 06:54:28.175931 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:54:28.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.179389 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 20 06:54:28.181681 kernel: audit: type=1130 audit(1768892068.176:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.201058 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:54:28.201876 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:54:28.203127 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:54:28.204184 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 06:54:28.205227 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 06:54:28.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.205356 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:54:28.210430 kernel: audit: type=1131 audit(1768892068.206:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.210602 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 06:54:28.211149 systemd[1]: Stopped target basic.target - Basic System. Jan 20 06:54:28.211664 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 06:54:28.212138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:54:28.214382 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 06:54:28.215312 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:54:28.216237 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 20 06:54:28.217160 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:54:28.218083 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 06:54:28.219007 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 06:54:28.219917 systemd[1]: Stopped target swap.target - Swaps. Jan 20 06:54:28.220803 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 06:54:28.221323 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:54:28.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.222321 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:54:28.223251 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:54:28.224097 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 06:54:28.224586 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:54:28.225431 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 06:54:28.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.225537 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 06:54:28.226559 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 06:54:28.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:28.226649 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:54:28.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.227422 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 06:54:28.227505 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 06:54:28.230430 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 06:54:28.230852 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 06:54:28.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.230978 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:54:28.234472 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 06:54:28.234915 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 06:54:28.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.235121 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 20 06:54:28.236452 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 06:54:28.236547 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:54:28.237024 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 06:54:28.237108 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:54:28.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.245611 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 06:54:28.245694 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 06:54:28.254759 ignition[1161]: INFO : Ignition 2.24.0 Jan 20 06:54:28.254759 ignition[1161]: INFO : Stage: umount Jan 20 06:54:28.254759 ignition[1161]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:54:28.254759 ignition[1161]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 06:54:28.256676 ignition[1161]: INFO : umount: umount passed Jan 20 06:54:28.256676 ignition[1161]: INFO : Ignition finished successfully Jan 20 06:54:28.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.256248 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 06:54:28.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:28.256363 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 06:54:28.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.257451 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 06:54:28.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.257538 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 06:54:28.258248 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 06:54:28.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.258333 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 06:54:28.259712 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 06:54:28.259754 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 06:54:28.260544 systemd[1]: Stopped target network.target - Network. Jan 20 06:54:28.261968 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 06:54:28.262012 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:54:28.262690 systemd[1]: Stopped target paths.target - Path Units. Jan 20 06:54:28.263529 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 06:54:28.267334 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:54:28.267887 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 06:54:28.268582 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 20 06:54:28.269640 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 06:54:28.269686 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:54:28.271688 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 06:54:28.271928 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:54:28.272350 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 06:54:28.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.272374 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:54:28.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.273087 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 06:54:28.273144 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 06:54:28.273755 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 06:54:28.273792 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 06:54:28.274518 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 06:54:28.275257 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 06:54:28.277828 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 06:54:28.278369 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 06:54:28.278499 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 06:54:28.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:28.279647 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 06:54:28.279703 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 06:54:28.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.285110 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 06:54:28.285211 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 06:54:28.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.287012 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 06:54:28.287000 audit: BPF prog-id=6 op=UNLOAD Jan 20 06:54:28.287106 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 06:54:28.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.289000 audit: BPF prog-id=9 op=UNLOAD Jan 20 06:54:28.289418 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 06:54:28.289929 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 06:54:28.289967 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:54:28.291381 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 06:54:28.291750 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 06:54:28.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:28.291797 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:54:28.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.293416 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 06:54:28.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.293453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:54:28.294063 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 06:54:28.294097 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 06:54:28.295368 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:54:28.301215 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 06:54:28.301709 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:54:28.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.302983 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 06:54:28.303031 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 06:54:28.304173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 06:54:28.304204 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:54:28.305361 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 20 06:54:28.305401 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:54:28.306559 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 06:54:28.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.306597 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 06:54:28.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.307764 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 06:54:28.307798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:54:28.309754 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 06:54:28.310514 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 06:54:28.310904 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:54:28.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.311818 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 06:54:28.312216 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 06:54:28.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.313069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:54:28.313103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:54:28.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.325859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 06:54:28.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.326746 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 06:54:28.328683 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 06:54:28.329239 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 06:54:28.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:28.329927 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 06:54:28.331413 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 06:54:28.351087 systemd[1]: Switching root. 
Jan 20 06:54:28.397512 systemd-journald[340]: Journal stopped Jan 20 06:54:29.697081 systemd-journald[340]: Received SIGTERM from PID 1 (systemd). Jan 20 06:54:29.697155 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 06:54:29.697173 kernel: SELinux: policy capability open_perms=1 Jan 20 06:54:29.697184 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 06:54:29.697195 kernel: SELinux: policy capability always_check_network=0 Jan 20 06:54:29.697213 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 06:54:29.697224 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 06:54:29.697235 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 06:54:29.697246 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 06:54:29.697257 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 06:54:29.697285 systemd[1]: Successfully loaded SELinux policy in 77.871ms. Jan 20 06:54:29.697305 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.339ms. Jan 20 06:54:29.697327 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:54:29.697342 systemd[1]: Detected virtualization kvm. Jan 20 06:54:29.697354 systemd[1]: Detected architecture x86-64. Jan 20 06:54:29.697366 systemd[1]: Detected first boot. Jan 20 06:54:29.697378 systemd[1]: Hostname set to . Jan 20 06:54:29.697391 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:54:29.697404 zram_generator::config[1205]: No configuration found. 
Jan 20 06:54:29.697429 kernel: Guest personality initialized and is inactive Jan 20 06:54:29.697441 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 06:54:29.697453 kernel: Initialized host personality Jan 20 06:54:29.697464 kernel: NET: Registered PF_VSOCK protocol family Jan 20 06:54:29.697475 systemd[1]: Populated /etc with preset unit settings. Jan 20 06:54:29.697493 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 06:54:29.697505 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 06:54:29.697517 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 06:54:29.697534 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 06:54:29.697546 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 06:54:29.697557 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 06:54:29.697568 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 06:54:29.697580 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 06:54:29.697594 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 06:54:29.697605 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 06:54:29.697617 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 06:54:29.697630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:54:29.697645 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:54:29.697657 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 06:54:29.697669 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 20 06:54:29.697682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 06:54:29.697694 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:54:29.697706 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 06:54:29.697718 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:54:29.697729 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:54:29.697742 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 06:54:29.697754 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 06:54:29.697765 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 06:54:29.697778 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 06:54:29.697790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:54:29.697804 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:54:29.697816 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 06:54:29.697829 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:54:29.697840 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:54:29.697852 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 06:54:29.697864 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 06:54:29.697875 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 06:54:29.697887 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:54:29.697898 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 20 06:54:29.697911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:54:29.697922 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 06:54:29.697935 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 06:54:29.697947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:54:29.697958 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:54:29.697969 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 06:54:29.697981 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 06:54:29.697994 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 06:54:29.698006 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 06:54:29.698018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:54:29.698029 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 06:54:29.698040 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 06:54:29.698052 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 06:54:29.698064 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 06:54:29.698077 systemd[1]: Reached target machines.target - Containers. Jan 20 06:54:29.698089 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 06:54:29.698102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:54:29.698113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 20 06:54:29.698126 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 06:54:29.698138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:54:29.698148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:54:29.698160 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:54:29.698171 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 06:54:29.698185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:54:29.698198 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 06:54:29.698210 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 06:54:29.698222 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 06:54:29.698233 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 06:54:29.698244 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 06:54:29.698256 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:54:29.698282 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:54:29.698296 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:54:29.698308 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:54:29.698319 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 06:54:29.698330 kernel: fuse: init (API version 7.41) Jan 20 06:54:29.698341 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Jan 20 06:54:29.698353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:54:29.698365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:54:29.698378 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 06:54:29.698389 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 06:54:29.698402 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 06:54:29.698413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 06:54:29.698426 kernel: ACPI: bus type drm_connector registered Jan 20 06:54:29.698437 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 06:54:29.698449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 06:54:29.698460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:54:29.698471 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 06:54:29.698483 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 06:54:29.698494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:54:29.698507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:54:29.698518 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:54:29.698530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:54:29.698541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:54:29.698552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:54:29.698564 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 06:54:29.698575 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 20 06:54:29.698588 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:54:29.698599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:54:29.698611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:54:29.698623 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 06:54:29.698636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:54:29.698648 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 06:54:29.698659 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:54:29.698671 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 06:54:29.698684 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 06:54:29.698696 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:54:29.698707 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 06:54:29.698719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:54:29.698731 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:54:29.698764 systemd-journald[1278]: Collecting audit messages is enabled. Jan 20 06:54:29.698789 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 06:54:29.698801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 20 06:54:29.698814 systemd-journald[1278]: Journal started Jan 20 06:54:29.698837 systemd-journald[1278]: Runtime Journal (/run/log/journal/cff2c703321b425aafeba8d1e85865ed) is 8M, max 77.9M, 69.9M free. Jan 20 06:54:29.422000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 06:54:29.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.544000 audit: BPF prog-id=14 op=UNLOAD Jan 20 06:54:29.544000 audit: BPF prog-id=13 op=UNLOAD Jan 20 06:54:29.546000 audit: BPF prog-id=15 op=LOAD Jan 20 06:54:29.546000 audit: BPF prog-id=16 op=LOAD Jan 20 06:54:29.546000 audit: BPF prog-id=17 op=LOAD Jan 20 06:54:29.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:29.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:29.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:29.689000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 06:54:29.689000 audit[1278]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd081da910 a2=4000 a3=0 items=0 ppid=1 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:54:29.689000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 06:54:29.334613 systemd[1]: Queued start job for default target multi-user.target. Jan 20 06:54:29.360736 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 06:54:29.361229 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 06:54:29.707185 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 06:54:29.707227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:54:29.712645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:54:29.723092 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 06:54:29.723157 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:54:29.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.732095 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 06:54:29.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:29.737147 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 06:54:29.742439 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 06:54:29.746480 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 06:54:29.748839 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 06:54:29.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.757486 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 06:54:29.770430 systemd-journald[1278]: Time spent on flushing to /var/log/journal/cff2c703321b425aafeba8d1e85865ed is 84.352ms for 1846 entries. Jan 20 06:54:29.770430 systemd-journald[1278]: System Journal (/var/log/journal/cff2c703321b425aafeba8d1e85865ed) is 8M, max 588.1M, 580.1M free. Jan 20 06:54:29.871553 systemd-journald[1278]: Received client request to flush runtime journal. Jan 20 06:54:29.871607 kernel: loop1: detected capacity change from 0 to 224512 Jan 20 06:54:29.871634 kernel: loop2: detected capacity change from 0 to 111560 Jan 20 06:54:29.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:54:29.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.860000 audit: BPF prog-id=18 op=LOAD Jan 20 06:54:29.860000 audit: BPF prog-id=19 op=LOAD Jan 20 06:54:29.860000 audit: BPF prog-id=20 op=LOAD Jan 20 06:54:29.863000 audit: BPF prog-id=21 op=LOAD Jan 20 06:54:29.794491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:54:29.809751 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:54:29.858706 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 06:54:29.862419 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 06:54:29.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.877000 audit: BPF prog-id=22 op=LOAD Jan 20 06:54:29.877000 audit: BPF prog-id=23 op=LOAD Jan 20 06:54:29.878000 audit: BPF prog-id=24 op=LOAD Jan 20 06:54:29.880000 audit: BPF prog-id=25 op=LOAD Jan 20 06:54:29.880000 audit: BPF prog-id=26 op=LOAD Jan 20 06:54:29.880000 audit: BPF prog-id=27 op=LOAD Jan 20 06:54:29.866779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:54:29.869615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:54:29.874603 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 06:54:29.879206 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 06:54:29.882410 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 20 06:54:29.889224 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 06:54:29.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.915304 kernel: loop3: detected capacity change from 0 to 1656 Jan 20 06:54:29.927240 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Jan 20 06:54:29.927255 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Jan 20 06:54:29.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.934711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:54:29.950917 systemd-nsresourced[1348]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 06:54:29.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.953967 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 06:54:29.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:29.955600 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 20 06:54:29.958299 kernel: loop4: detected capacity change from 0 to 50784 Jan 20 06:54:30.003288 kernel: loop5: detected capacity change from 0 to 224512 Jan 20 06:54:30.040342 kernel: loop6: detected capacity change from 0 to 111560 Jan 20 06:54:30.070816 kernel: loop7: detected capacity change from 0 to 1656 Jan 20 06:54:30.070190 systemd-oomd[1342]: No swap; memory pressure usage will be degraded Jan 20 06:54:30.070745 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 06:54:30.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.074986 systemd-resolved[1343]: Positive Trust Anchors: Jan 20 06:54:30.075217 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:54:30.075254 systemd-resolved[1343]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:54:30.075335 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:54:30.080295 kernel: loop1: detected capacity change from 0 to 50784 Jan 20 06:54:30.101316 systemd-resolved[1343]: Using system hostname 'ci-4585-0-0-n-f719bce5cf'. Jan 20 06:54:30.102572 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 20 06:54:30.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.103231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:54:30.105390 (sd-merge)[1371]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Jan 20 06:54:30.109501 (sd-merge)[1371]: Merged extensions into '/usr'. Jan 20 06:54:30.114419 systemd[1]: Reload requested from client PID 1310 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 06:54:30.114434 systemd[1]: Reloading... Jan 20 06:54:30.198303 zram_generator::config[1401]: No configuration found. Jan 20 06:54:30.363130 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 06:54:30.363353 systemd[1]: Reloading finished in 248 ms. Jan 20 06:54:30.379235 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 06:54:30.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.380079 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 06:54:30.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.384553 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 06:54:30.385919 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 06:54:30.395205 systemd[1]: Starting ensure-sysext.service... 
Jan 20 06:54:30.398393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:54:30.398000 audit: BPF prog-id=8 op=UNLOAD Jan 20 06:54:30.398000 audit: BPF prog-id=7 op=UNLOAD Jan 20 06:54:30.399000 audit: BPF prog-id=28 op=LOAD Jan 20 06:54:30.399000 audit: BPF prog-id=29 op=LOAD Jan 20 06:54:30.401795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:54:30.403000 audit: BPF prog-id=30 op=LOAD Jan 20 06:54:30.404000 audit: BPF prog-id=15 op=UNLOAD Jan 20 06:54:30.404000 audit: BPF prog-id=31 op=LOAD Jan 20 06:54:30.404000 audit: BPF prog-id=32 op=LOAD Jan 20 06:54:30.404000 audit: BPF prog-id=16 op=UNLOAD Jan 20 06:54:30.404000 audit: BPF prog-id=17 op=UNLOAD Jan 20 06:54:30.404000 audit: BPF prog-id=33 op=LOAD Jan 20 06:54:30.405000 audit: BPF prog-id=22 op=UNLOAD Jan 20 06:54:30.405000 audit: BPF prog-id=34 op=LOAD Jan 20 06:54:30.405000 audit: BPF prog-id=35 op=LOAD Jan 20 06:54:30.405000 audit: BPF prog-id=23 op=UNLOAD Jan 20 06:54:30.405000 audit: BPF prog-id=24 op=UNLOAD Jan 20 06:54:30.407000 audit: BPF prog-id=36 op=LOAD Jan 20 06:54:30.407000 audit: BPF prog-id=25 op=UNLOAD Jan 20 06:54:30.407000 audit: BPF prog-id=37 op=LOAD Jan 20 06:54:30.407000 audit: BPF prog-id=38 op=LOAD Jan 20 06:54:30.407000 audit: BPF prog-id=26 op=UNLOAD Jan 20 06:54:30.407000 audit: BPF prog-id=27 op=UNLOAD Jan 20 06:54:30.407000 audit: BPF prog-id=39 op=LOAD Jan 20 06:54:30.407000 audit: BPF prog-id=21 op=UNLOAD Jan 20 06:54:30.408000 audit: BPF prog-id=40 op=LOAD Jan 20 06:54:30.408000 audit: BPF prog-id=18 op=UNLOAD Jan 20 06:54:30.408000 audit: BPF prog-id=41 op=LOAD Jan 20 06:54:30.408000 audit: BPF prog-id=42 op=LOAD Jan 20 06:54:30.409000 audit: BPF prog-id=19 op=UNLOAD Jan 20 06:54:30.409000 audit: BPF prog-id=20 op=UNLOAD Jan 20 06:54:30.412745 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 20 06:54:30.413368 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 06:54:30.417501 systemd[1]: Reload requested from client PID 1446 ('systemctl') (unit ensure-sysext.service)... Jan 20 06:54:30.417516 systemd[1]: Reloading... Jan 20 06:54:30.422732 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 06:54:30.422757 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 06:54:30.425145 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 06:54:30.426129 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Jan 20 06:54:30.426178 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Jan 20 06:54:30.439829 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:54:30.439838 systemd-tmpfiles[1447]: Skipping /boot Jan 20 06:54:30.442762 systemd-udevd[1448]: Using default interface naming scheme 'v257'. Jan 20 06:54:30.451480 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:54:30.451491 systemd-tmpfiles[1447]: Skipping /boot Jan 20 06:54:30.510377 zram_generator::config[1492]: No configuration found. Jan 20 06:54:30.593301 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 06:54:30.616324 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Jan 20 06:54:30.640288 kernel: ACPI: button: Power Button [PWRF] Jan 20 06:54:30.741372 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 06:54:30.741652 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 06:54:30.741794 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 06:54:30.754163 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 20 06:54:30.755474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 06:54:30.756226 systemd[1]: Reloading finished in 338 ms. Jan 20 06:54:30.766312 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:54:30.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.768000 audit: BPF prog-id=43 op=LOAD Jan 20 06:54:30.768000 audit: BPF prog-id=40 op=UNLOAD Jan 20 06:54:30.768000 audit: BPF prog-id=44 op=LOAD Jan 20 06:54:30.768000 audit: BPF prog-id=45 op=LOAD Jan 20 06:54:30.768000 audit: BPF prog-id=41 op=UNLOAD Jan 20 06:54:30.768000 audit: BPF prog-id=42 op=UNLOAD Jan 20 06:54:30.768000 audit: BPF prog-id=46 op=LOAD Jan 20 06:54:30.769000 audit: BPF prog-id=36 op=UNLOAD Jan 20 06:54:30.769000 audit: BPF prog-id=47 op=LOAD Jan 20 06:54:30.769000 audit: BPF prog-id=48 op=LOAD Jan 20 06:54:30.769000 audit: BPF prog-id=37 op=UNLOAD Jan 20 06:54:30.769000 audit: BPF prog-id=38 op=UNLOAD Jan 20 06:54:30.769000 audit: BPF prog-id=49 op=LOAD Jan 20 06:54:30.769000 audit: BPF prog-id=39 op=UNLOAD Jan 20 06:54:30.770000 audit: BPF prog-id=50 op=LOAD Jan 20 06:54:30.771000 audit: BPF prog-id=30 op=UNLOAD Jan 20 06:54:30.771000 audit: BPF prog-id=51 op=LOAD Jan 20 06:54:30.771000 audit: BPF prog-id=52 op=LOAD Jan 20 06:54:30.771000 audit: BPF prog-id=31 op=UNLOAD Jan 20 06:54:30.771000 audit: BPF prog-id=32 op=UNLOAD Jan 20 06:54:30.771000 audit: BPF prog-id=53 op=LOAD Jan 20 06:54:30.771000 audit: BPF prog-id=54 op=LOAD Jan 20 06:54:30.771000 audit: BPF prog-id=28 op=UNLOAD Jan 20 06:54:30.771000 audit: BPF prog-id=29 op=UNLOAD Jan 20 06:54:30.772000 audit: BPF prog-id=55 op=LOAD Jan 20 06:54:30.772000 audit: BPF prog-id=33 op=UNLOAD Jan 20 06:54:30.772000 audit: BPF prog-id=56 op=LOAD Jan 20 
06:54:30.772000 audit: BPF prog-id=57 op=LOAD Jan 20 06:54:30.772000 audit: BPF prog-id=34 op=UNLOAD Jan 20 06:54:30.772000 audit: BPF prog-id=35 op=UNLOAD Jan 20 06:54:30.775504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:54:30.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.812294 systemd[1]: Finished ensure-sysext.service. Jan 20 06:54:30.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.814107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:54:30.817466 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:54:30.820510 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 06:54:30.821292 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 20 06:54:30.821875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:54:30.824475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:54:30.829024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:54:30.833846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:54:30.840692 kernel: Console: switching to colour dummy device 80x25 Jan 20 06:54:30.841483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 20 06:54:30.850313 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 20 06:54:30.850874 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 20 06:54:30.851124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:54:30.851292 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:54:30.854522 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 20 06:54:30.854607 kernel: [drm] features: -context_init Jan 20 06:54:30.853591 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 06:54:30.855868 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 06:54:30.856357 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:54:30.858019 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 06:54:30.858000 audit: BPF prog-id=58 op=LOAD Jan 20 06:54:30.861182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:54:30.861282 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 06:54:30.869491 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 06:54:30.869587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:54:30.870200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 20 06:54:30.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.870411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:54:30.907000 audit[1584]: SYSTEM_BOOT pid=1584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.913284 kernel: [drm] number of scanouts: 1 Jan 20 06:54:30.913456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:54:30.914161 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:54:30.914367 kernel: [drm] number of cap sets: 0 Jan 20 06:54:30.914814 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:54:30.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.916633 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 20 06:54:30.918311 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 20 06:54:30.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.924049 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 20 06:54:30.924129 kernel: Console: switching to colour frame buffer device 160x50 Jan 20 06:54:30.956213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:54:30.956444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:54:30.972749 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 20 06:54:30.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.976461 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:54:30.977217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:54:30.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.978372 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 20 06:54:30.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.985096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:54:30.985319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:54:30.989283 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 06:54:30.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.994135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:54:30.995153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:54:30.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:30.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:31.001440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:54:31.013019 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 06:54:31.013080 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 06:54:31.026547 kernel: PTP clock support registered Jan 20 06:54:31.031000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 06:54:31.031000 audit[1615]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8f66a8e0 a2=420 a3=0 items=0 ppid=1569 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:54:31.031000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:54:31.032085 augenrules[1615]: No rules Jan 20 06:54:31.032702 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:54:31.033274 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 06:54:31.033809 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 20 06:54:31.034428 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 20 06:54:31.080063 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 06:54:31.081878 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 06:54:31.113211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:54:31.113449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:54:31.124195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:54:31.144338 systemd-networkd[1581]: lo: Link UP Jan 20 06:54:31.144345 systemd-networkd[1581]: lo: Gained carrier Jan 20 06:54:31.147556 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 20 06:54:31.149047 systemd[1]: Reached target network.target - Network. Jan 20 06:54:31.150060 systemd-networkd[1581]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:54:31.150068 systemd-networkd[1581]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:54:31.150564 systemd-networkd[1581]: eth0: Link UP Jan 20 06:54:31.150755 systemd-networkd[1581]: eth0: Gained carrier Jan 20 06:54:31.150771 systemd-networkd[1581]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:54:31.154331 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 06:54:31.159407 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 06:54:31.165319 systemd-networkd[1581]: eth0: DHCPv4 address 10.0.0.92/25, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 06:54:31.204848 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 06:54:31.247890 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:54:31.747002 ldconfig[1577]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 06:54:31.754062 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 06:54:31.756216 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 06:54:31.780347 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 06:54:31.782492 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:54:31.783021 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 20 06:54:31.790124 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 06:54:31.790571 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 06:54:31.791092 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 06:54:31.791598 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 06:54:31.791984 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 06:54:31.792464 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 06:54:31.792821 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 06:54:31.793170 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 06:54:31.793195 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:54:31.794729 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:54:31.797450 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 06:54:31.799102 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 06:54:31.803401 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 06:54:31.805141 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 06:54:31.806381 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 06:54:31.815041 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 06:54:31.815929 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 06:54:31.817092 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 20 06:54:31.819521 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:54:31.819914 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:54:31.820385 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:54:31.820408 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:54:31.823765 systemd[1]: Starting chronyd.service - NTP client/server... Jan 20 06:54:31.829362 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 06:54:31.832090 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 06:54:31.836344 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 06:54:31.840422 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 06:54:31.842136 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 06:54:31.854400 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 06:54:31.854832 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 06:54:31.857314 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:31.857411 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 06:54:31.864453 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 06:54:31.875046 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 06:54:31.881092 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 06:54:31.884246 jq[1646]: false Jan 20 06:54:31.885781 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 20 06:54:31.892347 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Refreshing passwd entry cache Jan 20 06:54:31.892351 oslogin_cache_refresh[1649]: Refreshing passwd entry cache Jan 20 06:54:31.900159 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 06:54:31.903303 oslogin_cache_refresh[1649]: Failure getting users, quitting Jan 20 06:54:31.903495 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Failure getting users, quitting Jan 20 06:54:31.903495 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:54:31.903495 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Refreshing group entry cache Jan 20 06:54:31.901814 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 06:54:31.903321 oslogin_cache_refresh[1649]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:54:31.902322 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 06:54:31.903364 oslogin_cache_refresh[1649]: Refreshing group entry cache Jan 20 06:54:31.905417 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 06:54:31.912446 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Failure getting groups, quitting Jan 20 06:54:31.912446 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:54:31.910006 oslogin_cache_refresh[1649]: Failure getting groups, quitting Jan 20 06:54:31.910018 oslogin_cache_refresh[1649]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:54:31.913838 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 20 06:54:31.919965 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 06:54:31.920889 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 06:54:31.929949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 06:54:31.930394 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 06:54:31.931205 extend-filesystems[1647]: Found /dev/vda6 Jan 20 06:54:31.935693 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 06:54:31.936590 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 06:54:31.936782 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 06:54:31.939007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 06:54:31.939204 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 06:54:31.945453 extend-filesystems[1647]: Found /dev/vda9 Jan 20 06:54:31.949350 chronyd[1641]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 20 06:54:31.955374 extend-filesystems[1647]: Checking size of /dev/vda9 Jan 20 06:54:31.956908 jq[1662]: true Jan 20 06:54:31.956977 chronyd[1641]: Loaded seccomp filter (level 2) Jan 20 06:54:31.958498 systemd[1]: Started chronyd.service - NTP client/server. Jan 20 06:54:31.987617 extend-filesystems[1647]: Resized partition /dev/vda9 Jan 20 06:54:31.988179 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 20 06:54:31.992918 jq[1686]: true Jan 20 06:54:32.003190 extend-filesystems[1696]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 06:54:32.011357 update_engine[1661]: I20260120 06:54:32.008762 1661 main.cc:92] Flatcar Update Engine starting Jan 20 06:54:32.020558 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 20 06:54:32.024947 tar[1670]: linux-amd64/LICENSE Jan 20 06:54:32.040843 tar[1670]: linux-amd64/helm Jan 20 06:54:32.037486 dbus-daemon[1644]: [system] SELinux support is enabled Jan 20 06:54:32.045213 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 06:54:32.051806 update_engine[1661]: I20260120 06:54:32.051616 1661 update_check_scheduler.cc:74] Next update check in 4m45s Jan 20 06:54:32.054367 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 06:54:32.054405 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 06:54:32.056209 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 06:54:32.056235 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 06:54:32.056800 systemd[1]: Started update-engine.service - Update Engine. Jan 20 06:54:32.076135 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 06:54:32.088826 systemd-logind[1660]: New seat seat0. Jan 20 06:54:32.089646 systemd-logind[1660]: Watching system buttons on /dev/input/event3 (Power Button) Jan 20 06:54:32.089667 systemd-logind[1660]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 06:54:32.089869 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 20 06:54:32.284785 locksmithd[1707]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 06:54:32.310060 bash[1715]: Updated "/home/core/.ssh/authorized_keys" Jan 20 06:54:32.313092 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 06:54:32.317793 systemd[1]: Starting sshkeys.service... Jan 20 06:54:32.331837 containerd[1680]: time="2026-01-20T06:54:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 06:54:32.335744 containerd[1680]: time="2026-01-20T06:54:32.335091205Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 06:54:32.344163 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 06:54:32.348037 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 20 06:54:32.375285 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:32.379000 containerd[1680]: time="2026-01-20T06:54:32.378959120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.963µs" Jan 20 06:54:32.379000 containerd[1680]: time="2026-01-20T06:54:32.378996953Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 06:54:32.379256 containerd[1680]: time="2026-01-20T06:54:32.379239330Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 06:54:32.379294 containerd[1680]: time="2026-01-20T06:54:32.379258222Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 06:54:32.379666 containerd[1680]: time="2026-01-20T06:54:32.379648247Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 06:54:32.379691 containerd[1680]: time="2026-01-20T06:54:32.379670680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380281 containerd[1680]: time="2026-01-20T06:54:32.379722684Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380281 containerd[1680]: time="2026-01-20T06:54:32.379735372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380281 containerd[1680]: time="2026-01-20T06:54:32.379910089Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380281 containerd[1680]: 
time="2026-01-20T06:54:32.379924047Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380281 containerd[1680]: time="2026-01-20T06:54:32.379933827Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380281 containerd[1680]: time="2026-01-20T06:54:32.379941442Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380689 containerd[1680]: time="2026-01-20T06:54:32.380671443Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380713 containerd[1680]: time="2026-01-20T06:54:32.380687649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380762 containerd[1680]: time="2026-01-20T06:54:32.380750844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380917 containerd[1680]: time="2026-01-20T06:54:32.380903918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380942 containerd[1680]: time="2026-01-20T06:54:32.380931688Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:54:32.380966 containerd[1680]: time="2026-01-20T06:54:32.380942458Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 06:54:32.383382 containerd[1680]: time="2026-01-20T06:54:32.381601227Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups 
type=io.containerd.monitor.task.v1 Jan 20 06:54:32.383382 containerd[1680]: time="2026-01-20T06:54:32.382339045Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 06:54:32.383382 containerd[1680]: time="2026-01-20T06:54:32.382408672Z" level=info msg="metadata content store policy set" policy=shared Jan 20 06:54:32.438060 containerd[1680]: time="2026-01-20T06:54:32.438015206Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 06:54:32.438156 containerd[1680]: time="2026-01-20T06:54:32.438083082Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:54:32.438413 containerd[1680]: time="2026-01-20T06:54:32.438386224Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:54:32.438413 containerd[1680]: time="2026-01-20T06:54:32.438405420Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 06:54:32.438459 containerd[1680]: time="2026-01-20T06:54:32.438417590Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 06:54:32.438459 containerd[1680]: time="2026-01-20T06:54:32.438429150Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 06:54:32.438559 containerd[1680]: time="2026-01-20T06:54:32.438547536Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 06:54:32.438579 containerd[1680]: time="2026-01-20T06:54:32.438560669Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 06:54:32.438579 containerd[1680]: time="2026-01-20T06:54:32.438571386Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 06:54:32.438615 containerd[1680]: time="2026-01-20T06:54:32.438581793Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 06:54:32.438615 containerd[1680]: time="2026-01-20T06:54:32.438592449Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 06:54:32.438800 containerd[1680]: time="2026-01-20T06:54:32.438602620Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 06:54:32.438823 containerd[1680]: time="2026-01-20T06:54:32.438808936Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 06:54:32.438842 containerd[1680]: time="2026-01-20T06:54:32.438828299Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 06:54:32.439379 containerd[1680]: time="2026-01-20T06:54:32.439363629Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 06:54:32.439413 containerd[1680]: time="2026-01-20T06:54:32.439387605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 06:54:32.439413 containerd[1680]: time="2026-01-20T06:54:32.439401726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 06:54:32.439413 containerd[1680]: time="2026-01-20T06:54:32.439411877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 06:54:32.439765 containerd[1680]: time="2026-01-20T06:54:32.439723192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 06:54:32.439765 containerd[1680]: time="2026-01-20T06:54:32.439737984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Jan 20 06:54:32.439765 containerd[1680]: time="2026-01-20T06:54:32.439748773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 06:54:32.439765 containerd[1680]: time="2026-01-20T06:54:32.439763098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 06:54:32.439834 containerd[1680]: time="2026-01-20T06:54:32.439775261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 06:54:32.439834 containerd[1680]: time="2026-01-20T06:54:32.439786216Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 06:54:32.439834 containerd[1680]: time="2026-01-20T06:54:32.439805715Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 06:54:32.439834 containerd[1680]: time="2026-01-20T06:54:32.439828614Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 06:54:32.439899 containerd[1680]: time="2026-01-20T06:54:32.439878749Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 06:54:32.439899 containerd[1680]: time="2026-01-20T06:54:32.439891583Z" level=info msg="Start snapshots syncer" Jan 20 06:54:32.439934 containerd[1680]: time="2026-01-20T06:54:32.439916339Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 06:54:32.440869 containerd[1680]: time="2026-01-20T06:54:32.440728592Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 06:54:32.440986 containerd[1680]: time="2026-01-20T06:54:32.440959900Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 06:54:32.441045 containerd[1680]: 
time="2026-01-20T06:54:32.441008726Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 06:54:32.441651 containerd[1680]: time="2026-01-20T06:54:32.441559009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 06:54:32.441651 containerd[1680]: time="2026-01-20T06:54:32.441589668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 06:54:32.441651 containerd[1680]: time="2026-01-20T06:54:32.441601244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 06:54:32.441651 containerd[1680]: time="2026-01-20T06:54:32.441609952Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 06:54:32.441651 containerd[1680]: time="2026-01-20T06:54:32.441629985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 06:54:32.441651 containerd[1680]: time="2026-01-20T06:54:32.441640243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 06:54:32.441762 containerd[1680]: time="2026-01-20T06:54:32.441654914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 06:54:32.441762 containerd[1680]: time="2026-01-20T06:54:32.441665230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 06:54:32.441762 containerd[1680]: time="2026-01-20T06:54:32.441675653Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 06:54:32.441762 containerd[1680]: time="2026-01-20T06:54:32.441711515Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:54:32.441762 containerd[1680]: 
time="2026-01-20T06:54:32.441725876Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:54:32.441762 containerd[1680]: time="2026-01-20T06:54:32.441734451Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:54:32.441762 containerd[1680]: time="2026-01-20T06:54:32.441746766Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:54:32.441762 containerd[1680]: time="2026-01-20T06:54:32.441754767Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 06:54:32.442126 containerd[1680]: time="2026-01-20T06:54:32.442089325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 06:54:32.442126 containerd[1680]: time="2026-01-20T06:54:32.442104727Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 06:54:32.442126 containerd[1680]: time="2026-01-20T06:54:32.442119965Z" level=info msg="runtime interface created" Jan 20 06:54:32.442126 containerd[1680]: time="2026-01-20T06:54:32.442125429Z" level=info msg="created NRI interface" Jan 20 06:54:32.442218 containerd[1680]: time="2026-01-20T06:54:32.442133077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 06:54:32.442218 containerd[1680]: time="2026-01-20T06:54:32.442144072Z" level=info msg="Connect containerd service" Jan 20 06:54:32.442218 containerd[1680]: time="2026-01-20T06:54:32.442172271Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 06:54:32.443950 containerd[1680]: time="2026-01-20T06:54:32.443927219Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 06:54:32.519117 sshd_keygen[1695]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 06:54:32.571292 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 06:54:32.574622 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 06:54:32.576295 systemd[1]: Started sshd@0-10.0.0.92:22-20.161.92.111:54164.service - OpenSSH per-connection server daemon (20.161.92.111:54164). Jan 20 06:54:32.599245 containerd[1680]: time="2026-01-20T06:54:32.599214985Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 06:54:32.599343 containerd[1680]: time="2026-01-20T06:54:32.599279684Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 06:54:32.599343 containerd[1680]: time="2026-01-20T06:54:32.599291650Z" level=info msg="Start subscribing containerd event" Jan 20 06:54:32.599343 containerd[1680]: time="2026-01-20T06:54:32.599311679Z" level=info msg="Start recovering state" Jan 20 06:54:32.599408 containerd[1680]: time="2026-01-20T06:54:32.599390889Z" level=info msg="Start event monitor" Jan 20 06:54:32.599408 containerd[1680]: time="2026-01-20T06:54:32.599403026Z" level=info msg="Start cni network conf syncer for default" Jan 20 06:54:32.599444 containerd[1680]: time="2026-01-20T06:54:32.599408764Z" level=info msg="Start streaming server" Jan 20 06:54:32.599444 containerd[1680]: time="2026-01-20T06:54:32.599420043Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 06:54:32.599444 containerd[1680]: time="2026-01-20T06:54:32.599426701Z" level=info msg="runtime interface starting up..." Jan 20 06:54:32.599444 containerd[1680]: time="2026-01-20T06:54:32.599431835Z" level=info msg="starting plugins..." 
Jan 20 06:54:32.599444 containerd[1680]: time="2026-01-20T06:54:32.599442495Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 06:54:32.599667 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 06:54:32.604474 containerd[1680]: time="2026-01-20T06:54:32.604302948Z" level=info msg="containerd successfully booted in 0.272794s" Jan 20 06:54:32.605461 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 06:54:32.605661 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 06:54:32.611550 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 06:54:32.617295 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 20 06:54:32.644395 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 06:54:32.649539 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 06:54:32.651118 extend-filesystems[1696]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 06:54:32.651118 extend-filesystems[1696]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 20 06:54:32.651118 extend-filesystems[1696]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 20 06:54:32.653242 extend-filesystems[1647]: Resized filesystem in /dev/vda9 Jan 20 06:54:32.656382 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 06:54:32.657033 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 06:54:32.657941 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 06:54:32.659335 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 06:54:32.768842 tar[1670]: linux-amd64/README.md Jan 20 06:54:32.785635 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 20 06:54:32.891309 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:33.115450 systemd-networkd[1581]: eth0: Gained IPv6LL Jan 20 06:54:33.117738 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 06:54:33.120147 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 06:54:33.122764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:54:33.126541 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 06:54:33.147607 sshd[1755]: Accepted publickey for core from 20.161.92.111 port 54164 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:33.152744 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:33.170555 systemd-logind[1660]: New session 1 of user core. Jan 20 06:54:33.172545 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 06:54:33.174787 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 06:54:33.181956 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 06:54:33.207847 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 06:54:33.212011 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 06:54:33.228645 (systemd)[1786]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:33.233333 systemd-logind[1660]: New session 2 of user core. Jan 20 06:54:33.350553 systemd[1786]: Queued start job for default target default.target. Jan 20 06:54:33.355650 systemd[1786]: Created slice app.slice - User Application Slice. Jan 20 06:54:33.355697 systemd[1786]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 06:54:33.355712 systemd[1786]: Reached target paths.target - Paths. 
Jan 20 06:54:33.356086 systemd[1786]: Reached target timers.target - Timers. Jan 20 06:54:33.360416 systemd[1786]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 06:54:33.361375 systemd[1786]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 06:54:33.372689 systemd[1786]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 06:54:33.372900 systemd[1786]: Reached target sockets.target - Sockets. Jan 20 06:54:33.387517 systemd[1786]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 06:54:33.387668 systemd[1786]: Reached target basic.target - Basic System. Jan 20 06:54:33.388026 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 06:54:33.388238 systemd[1786]: Reached target default.target - Main User Target. Jan 20 06:54:33.388299 systemd[1786]: Startup finished in 148ms. Jan 20 06:54:33.394298 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:33.399629 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 06:54:33.712297 systemd[1]: Started sshd@1-10.0.0.92:22-20.161.92.111:54172.service - OpenSSH per-connection server daemon (20.161.92.111:54172). Jan 20 06:54:34.258207 sshd[1801]: Accepted publickey for core from 20.161.92.111 port 54172 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:34.260992 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:34.267518 systemd-logind[1660]: New session 3 of user core. Jan 20 06:54:34.276561 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 06:54:34.452874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 06:54:34.462665 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:54:34.564055 sshd[1805]: Connection closed by 20.161.92.111 port 54172 Jan 20 06:54:34.565454 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Jan 20 06:54:34.569724 systemd[1]: sshd@1-10.0.0.92:22-20.161.92.111:54172.service: Deactivated successfully. Jan 20 06:54:34.572404 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 06:54:34.574427 systemd-logind[1660]: Session 3 logged out. Waiting for processes to exit. Jan 20 06:54:34.575913 systemd-logind[1660]: Removed session 3. Jan 20 06:54:34.675491 systemd[1]: Started sshd@2-10.0.0.92:22-20.161.92.111:54186.service - OpenSSH per-connection server daemon (20.161.92.111:54186). Jan 20 06:54:34.902300 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:35.232315 sshd[1821]: Accepted publickey for core from 20.161.92.111 port 54186 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:35.233666 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:35.241669 systemd-logind[1660]: New session 4 of user core. Jan 20 06:54:35.248947 kubelet[1811]: E0120 06:54:35.248900 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:54:35.250754 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 06:54:35.254848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:54:35.255011 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 06:54:35.255480 systemd[1]: kubelet.service: Consumed 985ms CPU time, 264.8M memory peak. Jan 20 06:54:35.415287 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:35.533917 sshd[1829]: Connection closed by 20.161.92.111 port 54186 Jan 20 06:54:35.533779 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jan 20 06:54:35.540201 systemd[1]: sshd@2-10.0.0.92:22-20.161.92.111:54186.service: Deactivated successfully. Jan 20 06:54:35.543016 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 06:54:35.545049 systemd-logind[1660]: Session 4 logged out. Waiting for processes to exit. Jan 20 06:54:35.547890 systemd-logind[1660]: Removed session 4. Jan 20 06:54:38.915315 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:38.921287 coreos-metadata[1643]: Jan 20 06:54:38.921 WARN failed to locate config-drive, using the metadata service API instead Jan 20 06:54:38.938972 coreos-metadata[1643]: Jan 20 06:54:38.938 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 20 06:54:39.425358 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 06:54:39.435585 coreos-metadata[1729]: Jan 20 06:54:39.435 WARN failed to locate config-drive, using the metadata service API instead Jan 20 06:54:39.447907 coreos-metadata[1729]: Jan 20 06:54:39.447 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 20 06:54:41.303157 coreos-metadata[1729]: Jan 20 06:54:41.303 INFO Fetch successful Jan 20 06:54:41.303157 coreos-metadata[1729]: Jan 20 06:54:41.303 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 20 06:54:41.444793 coreos-metadata[1729]: Jan 20 06:54:41.444 INFO Fetch successful Jan 20 06:54:41.457606 unknown[1729]: wrote ssh authorized keys file for user: core Jan 20 06:54:41.489106 update-ssh-keys[1844]: Updated "/home/core/.ssh/authorized_keys" Jan 20 06:54:41.490984 systemd[1]: Finished 
coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 06:54:41.492393 systemd[1]: Finished sshkeys.service. Jan 20 06:54:41.605463 coreos-metadata[1643]: Jan 20 06:54:41.605 INFO Fetch successful Jan 20 06:54:41.605463 coreos-metadata[1643]: Jan 20 06:54:41.605 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 06:54:41.755890 coreos-metadata[1643]: Jan 20 06:54:41.755 INFO Fetch successful Jan 20 06:54:41.755890 coreos-metadata[1643]: Jan 20 06:54:41.755 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 20 06:54:42.073542 coreos-metadata[1643]: Jan 20 06:54:42.073 INFO Fetch successful Jan 20 06:54:42.073542 coreos-metadata[1643]: Jan 20 06:54:42.073 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 20 06:54:42.211018 coreos-metadata[1643]: Jan 20 06:54:42.210 INFO Fetch successful Jan 20 06:54:42.211018 coreos-metadata[1643]: Jan 20 06:54:42.211 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 20 06:54:42.333925 coreos-metadata[1643]: Jan 20 06:54:42.333 INFO Fetch successful Jan 20 06:54:42.333925 coreos-metadata[1643]: Jan 20 06:54:42.333 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 20 06:54:42.463019 coreos-metadata[1643]: Jan 20 06:54:42.462 INFO Fetch successful Jan 20 06:54:42.491252 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 06:54:42.491711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 06:54:42.491850 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 06:54:42.491942 systemd[1]: Startup finished in 3.656s (kernel) + 12.897s (initrd) + 13.969s (userspace) = 30.523s. Jan 20 06:54:45.507382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 20 06:54:45.511222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:54:45.641587 systemd[1]: Started sshd@3-10.0.0.92:22-20.161.92.111:34544.service - OpenSSH per-connection server daemon (20.161.92.111:34544). Jan 20 06:54:45.656443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:54:45.662755 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:54:45.708067 kubelet[1862]: E0120 06:54:45.708029 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:54:45.711580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:54:45.712129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:54:45.712500 systemd[1]: kubelet.service: Consumed 163ms CPU time, 110.5M memory peak. Jan 20 06:54:46.157443 sshd[1858]: Accepted publickey for core from 20.161.92.111 port 34544 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:46.158657 sshd-session[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:46.163308 systemd-logind[1660]: New session 5 of user core. Jan 20 06:54:46.176589 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 06:54:46.445522 sshd[1872]: Connection closed by 20.161.92.111 port 34544 Jan 20 06:54:46.446077 sshd-session[1858]: pam_unix(sshd:session): session closed for user core Jan 20 06:54:46.449550 systemd[1]: sshd@3-10.0.0.92:22-20.161.92.111:34544.service: Deactivated successfully. Jan 20 06:54:46.451322 systemd[1]: session-5.scope: Deactivated successfully. 
Jan 20 06:54:46.451978 systemd-logind[1660]: Session 5 logged out. Waiting for processes to exit. Jan 20 06:54:46.453236 systemd-logind[1660]: Removed session 5. Jan 20 06:54:46.555652 systemd[1]: Started sshd@4-10.0.0.92:22-20.161.92.111:34560.service - OpenSSH per-connection server daemon (20.161.92.111:34560). Jan 20 06:54:47.080867 sshd[1878]: Accepted publickey for core from 20.161.92.111 port 34560 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:47.081966 sshd-session[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:47.085884 systemd-logind[1660]: New session 6 of user core. Jan 20 06:54:47.093571 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 06:54:47.371076 sshd[1882]: Connection closed by 20.161.92.111 port 34560 Jan 20 06:54:47.371658 sshd-session[1878]: pam_unix(sshd:session): session closed for user core Jan 20 06:54:47.376118 systemd[1]: sshd@4-10.0.0.92:22-20.161.92.111:34560.service: Deactivated successfully. Jan 20 06:54:47.377556 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 06:54:47.378740 systemd-logind[1660]: Session 6 logged out. Waiting for processes to exit. Jan 20 06:54:47.380619 systemd-logind[1660]: Removed session 6. Jan 20 06:54:47.480479 systemd[1]: Started sshd@5-10.0.0.92:22-20.161.92.111:34576.service - OpenSSH per-connection server daemon (20.161.92.111:34576). Jan 20 06:54:48.021305 sshd[1888]: Accepted publickey for core from 20.161.92.111 port 34576 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:48.022098 sshd-session[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:48.027833 systemd-logind[1660]: New session 7 of user core. Jan 20 06:54:48.034507 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 06:54:48.324776 sshd[1892]: Connection closed by 20.161.92.111 port 34576 Jan 20 06:54:48.324250 sshd-session[1888]: pam_unix(sshd:session): session closed for user core Jan 20 06:54:48.327892 systemd[1]: sshd@5-10.0.0.92:22-20.161.92.111:34576.service: Deactivated successfully. Jan 20 06:54:48.329472 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 06:54:48.330115 systemd-logind[1660]: Session 7 logged out. Waiting for processes to exit. Jan 20 06:54:48.331369 systemd-logind[1660]: Removed session 7. Jan 20 06:54:48.428420 systemd[1]: Started sshd@6-10.0.0.92:22-20.161.92.111:34584.service - OpenSSH per-connection server daemon (20.161.92.111:34584). Jan 20 06:54:48.943313 sshd[1898]: Accepted publickey for core from 20.161.92.111 port 34584 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:48.944292 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:48.948282 systemd-logind[1660]: New session 8 of user core. Jan 20 06:54:48.955788 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 06:54:49.161929 sudo[1903]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 06:54:49.162192 sudo[1903]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:54:49.175286 sudo[1903]: pam_unix(sudo:session): session closed for user root Jan 20 06:54:49.270632 sshd[1902]: Connection closed by 20.161.92.111 port 34584 Jan 20 06:54:49.270426 sshd-session[1898]: pam_unix(sshd:session): session closed for user core Jan 20 06:54:49.277785 systemd[1]: sshd@6-10.0.0.92:22-20.161.92.111:34584.service: Deactivated successfully. Jan 20 06:54:49.280233 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 06:54:49.281397 systemd-logind[1660]: Session 8 logged out. Waiting for processes to exit. Jan 20 06:54:49.283129 systemd-logind[1660]: Removed session 8. 
Jan 20 06:54:49.378382 systemd[1]: Started sshd@7-10.0.0.92:22-20.161.92.111:34592.service - OpenSSH per-connection server daemon (20.161.92.111:34592). Jan 20 06:54:49.901849 sshd[1910]: Accepted publickey for core from 20.161.92.111 port 34592 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:54:49.902496 sshd-session[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:54:49.906705 systemd-logind[1660]: New session 9 of user core. Jan 20 06:54:49.918695 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 06:54:50.111626 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 06:54:50.112293 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:54:50.116016 sudo[1916]: pam_unix(sudo:session): session closed for user root Jan 20 06:54:50.126032 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 06:54:50.126439 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:54:50.142084 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 20 06:54:50.210000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 20 06:54:50.212366 kernel: kauditd_printk_skb: 186 callbacks suppressed
Jan 20 06:54:50.212412 kernel: audit: type=1305 audit(1768892090.210:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 20 06:54:50.210000 audit[1940]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed7697090 a2=420 a3=0 items=0 ppid=1921 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:50.215709 kernel: audit: type=1300 audit(1768892090.210:232): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed7697090 a2=420 a3=0 items=0 ppid=1921 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:50.210000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 06:54:50.215786 augenrules[1940]: No rules
Jan 20 06:54:50.217434 kernel: audit: type=1327 audit(1768892090.210:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 06:54:50.218542 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 06:54:50.218892 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 06:54:50.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.221657 sudo[1915]: pam_unix(sudo:session): session closed for user root
Jan 20 06:54:50.222298 kernel: audit: type=1130 audit(1768892090.219:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.222341 kernel: audit: type=1131 audit(1768892090.219:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.224304 kernel: audit: type=1106 audit(1768892090.221:235): pid=1915 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.221000 audit[1915]: USER_END pid=1915 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.221000 audit[1915]: CRED_DISP pid=1915 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.228827 kernel: audit: type=1104 audit(1768892090.221:236): pid=1915 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.318335 sshd[1914]: Connection closed by 20.161.92.111 port 34592
Jan 20 06:54:50.318698 sshd-session[1910]: pam_unix(sshd:session): session closed for user core
Jan 20 06:54:50.320000 audit[1910]: USER_END pid=1910 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:50.328105 kernel: audit: type=1106 audit(1768892090.320:237): pid=1910 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:50.328151 kernel: audit: type=1104 audit(1768892090.322:238): pid=1910 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:50.322000 audit[1910]: CRED_DISP pid=1910 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:50.327597 systemd[1]: sshd@7-10.0.0.92:22-20.161.92.111:34592.service: Deactivated successfully.
Jan 20 06:54:50.331416 kernel: audit: type=1131 audit(1768892090.327:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.92:22-20.161.92.111:34592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.92:22-20.161.92.111:34592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.330465 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 06:54:50.333235 systemd-logind[1660]: Session 9 logged out. Waiting for processes to exit.
Jan 20 06:54:50.335056 systemd-logind[1660]: Removed session 9.
Jan 20 06:54:50.432517 systemd[1]: Started sshd@8-10.0.0.92:22-20.161.92.111:34602.service - OpenSSH per-connection server daemon (20.161.92.111:34602).
Jan 20 06:54:50.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-20.161.92.111:34602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:50.982000 audit[1949]: USER_ACCT pid=1949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:50.982715 sshd[1949]: Accepted publickey for core from 20.161.92.111 port 34602 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE
Jan 20 06:54:50.983000 audit[1949]: CRED_ACQ pid=1949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:50.983000 audit[1949]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2e07dcf0 a2=3 a3=0 items=0 ppid=1 pid=1949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:50.983000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 06:54:50.983930 sshd-session[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 06:54:50.988314 systemd-logind[1660]: New session 10 of user core.
Jan 20 06:54:51.000801 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 06:54:51.003000 audit[1949]: USER_START pid=1949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:51.005000 audit[1953]: CRED_ACQ pid=1953 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 20 06:54:51.187000 audit[1954]: USER_ACCT pid=1954 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:51.187000 audit[1954]: CRED_REFR pid=1954 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:51.187000 audit[1954]: USER_START pid=1954 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 06:54:51.187663 sudo[1954]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 06:54:51.187920 sudo[1954]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 06:54:51.629424 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 06:54:51.640604 (dockerd)[1973]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 06:54:51.987022 dockerd[1973]: time="2026-01-20T06:54:51.986558945Z" level=info msg="Starting up"
Jan 20 06:54:51.987541 dockerd[1973]: time="2026-01-20T06:54:51.987510581Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 06:54:51.998675 dockerd[1973]: time="2026-01-20T06:54:51.998596701Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 06:54:52.045137 dockerd[1973]: time="2026-01-20T06:54:52.044934484Z" level=info msg="Loading containers: start."
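The dockerd lines above use logrus-style key=value formatting with double-quoted values. A minimal sketch of parsing one such line, assuming Python; `parse_kv` is an illustrative helper, not part of Docker's tooling:

```python
import shlex

def parse_kv(line: str) -> dict:
    # split on whitespace while respecting double-quoted values,
    # then break each token at the first '=' into a key/value pair
    return dict(tok.split("=", 1) for tok in shlex.split(line))

rec = parse_kv('time="2026-01-20T06:54:51.986558945Z" level=info msg="Starting up"')
print(rec["level"], rec["msg"])  # -> info Starting up
```

This only handles the simple `key="value"` shape seen in these entries; values containing an unescaped `=` inside quotes are still parsed correctly because the split happens per token, at the first `=` only.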
Jan 20 06:54:52.056300 kernel: Initializing XFRM netlink socket
Jan 20 06:54:52.124000 audit[2021]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.124000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffc3523770 a2=0 a3=0 items=0 ppid=1973 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 20 06:54:52.126000 audit[2023]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.126000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcda9567a0 a2=0 a3=0 items=0 ppid=1973 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 20 06:54:52.128000 audit[2025]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.128000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffb202040 a2=0 a3=0 items=0 ppid=1973 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 20 06:54:52.130000 audit[2027]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.130000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffc3e4780 a2=0 a3=0 items=0 ppid=1973 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 20 06:54:52.132000 audit[2029]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.132000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3f7440e0 a2=0 a3=0 items=0 ppid=1973 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 20 06:54:52.134000 audit[2031]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.134000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe2b98b240 a2=0 a3=0 items=0 ppid=1973 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.134000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 20 06:54:52.136000 audit[2033]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.136000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd50553a70 a2=0 a3=0 items=0 ppid=1973 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.136000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 20 06:54:52.138000 audit[2035]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.138000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffd71e3c00 a2=0 a3=0 items=0 ppid=1973 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.138000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 20 06:54:52.170000 audit[2038]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.170000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffcbca3c6a0 a2=0 a3=0 items=0 ppid=1973 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jan 20 06:54:52.172000 audit[2040]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.172000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc40786e40 a2=0 a3=0 items=0 ppid=1973 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 20 06:54:52.174000 audit[2042]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.174000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffef876600 a2=0 a3=0 items=0 ppid=1973 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 20 06:54:52.176000 audit[2044]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.176000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd1b025b00 a2=0 a3=0 items=0 ppid=1973 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 20 06:54:52.178000 audit[2046]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.178000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff4fef63e0 a2=0 a3=0 items=0 ppid=1973 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 20 06:54:52.221000 audit[2076]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.221000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd6bdf3cc0 a2=0 a3=0 items=0 ppid=1973 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.221000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 20 06:54:52.223000 audit[2078]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.223000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff16cad9f0 a2=0 a3=0 items=0 ppid=1973 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 20 06:54:52.227000 audit[2080]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.227000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffe8fb390 a2=0 a3=0 items=0 ppid=1973 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 20 06:54:52.229000 audit[2082]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.229000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda807dee0 a2=0 a3=0 items=0 ppid=1973 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 20 06:54:52.231000 audit[2084]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.231000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea5366b90 a2=0 a3=0 items=0 ppid=1973 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 20 06:54:52.232000 audit[2086]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.232000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffde7d5bc0 a2=0 a3=0 items=0 ppid=1973 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 20 06:54:52.234000 audit[2088]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.234000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc42981ef0 a2=0 a3=0 items=0 ppid=1973 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.234000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 20 06:54:52.236000 audit[2090]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.236000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd9952d360 a2=0 a3=0 items=0 ppid=1973 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 20 06:54:52.239000 audit[2092]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.239000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff6e9164b0 a2=0 a3=0 items=0 ppid=1973 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Jan 20 06:54:52.241000 audit[2094]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.241000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe2dec4d30 a2=0 a3=0 items=0 ppid=1973 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 20 06:54:52.243000 audit[2096]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.243000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff99599610 a2=0 a3=0 items=0 ppid=1973 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 20 06:54:52.245000 audit[2098]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.245000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff7c163b20 a2=0 a3=0 items=0 ppid=1973 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 20 06:54:52.247000 audit[2100]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.247000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe2b35c370 a2=0 a3=0 items=0 ppid=1973 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 20 06:54:52.252000 audit[2105]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.252000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb79464b0 a2=0 a3=0 items=0 ppid=1973 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jan 20 06:54:52.254000 audit[2107]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.254000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe41b70ba0 a2=0 a3=0 items=0 ppid=1973 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jan 20 06:54:52.256000 audit[2109]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.256000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe141fcd00 a2=0 a3=0 items=0 ppid=1973 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jan 20 06:54:52.258000 audit[2111]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.258000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffff128070 a2=0 a3=0 items=0 ppid=1973 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jan 20 06:54:52.260000 audit[2113]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.260000 audit[2113]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe36095510 a2=0 a3=0 items=0 ppid=1973 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jan 20 06:54:52.262000 audit[2115]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 06:54:52.262000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffed04ab980 a2=0 a3=0 items=0 ppid=1973 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jan 20 06:54:52.287000 audit[2120]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.287000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff59ce8bc0 a2=0 a3=0 items=0 ppid=1973 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Jan 20 06:54:52.289000 audit[2122]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.289000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffff67a4f90 a2=0 a3=0 items=0 ppid=1973 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Jan 20 06:54:52.298000 audit[2130]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.298000 audit[2130]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc1d6d01d0 a2=0 a3=0 items=0 ppid=1973 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054
Jan 20 06:54:52.309000 audit[2136]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.309000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc9de18c60 a2=0 a3=0 items=0 ppid=1973 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50
Jan 20 06:54:52.311000 audit[2138]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.311000 audit[2138]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffebc743e0 a2=0 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 06:54:52.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Jan 20 06:54:52.313000 audit[2140]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 06:54:52.313000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc478e1d90 a2=0 a3=0 items=0 ppid=1973 pid=2140 auid=4294967295 uid=0 gid=0 euid=0
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:54:52.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 06:54:52.315000 audit[2142]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:54:52.315000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd1751e490 a2=0 a3=0 items=0 ppid=1973 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:54:52.315000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 06:54:52.317000 audit[2144]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:54:52.317000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc1c4d1580 a2=0 a3=0 items=0 ppid=1973 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:54:52.317000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 06:54:52.318225 systemd-networkd[1581]: docker0: Link UP Jan 20 06:54:52.325339 dockerd[1973]: time="2026-01-20T06:54:52.325291770Z" 
level=info msg="Loading containers: done." Jan 20 06:54:52.348685 dockerd[1973]: time="2026-01-20T06:54:52.348612784Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 06:54:52.348838 dockerd[1973]: time="2026-01-20T06:54:52.348706173Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 06:54:52.348838 dockerd[1973]: time="2026-01-20T06:54:52.348798490Z" level=info msg="Initializing buildkit" Jan 20 06:54:52.383670 dockerd[1973]: time="2026-01-20T06:54:52.383629924Z" level=info msg="Completed buildkit initialization" Jan 20 06:54:52.390923 dockerd[1973]: time="2026-01-20T06:54:52.390870570Z" level=info msg="Daemon has completed initialization" Jan 20 06:54:52.391139 dockerd[1973]: time="2026-01-20T06:54:52.390943387Z" level=info msg="API listen on /run/docker.sock" Jan 20 06:54:52.391394 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 06:54:52.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:53.629657 containerd[1680]: time="2026-01-20T06:54:53.629613674Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 06:54:54.198386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752580614.mount: Deactivated successfully. 
Jan 20 06:54:55.117880 containerd[1680]: time="2026-01-20T06:54:55.117834939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:55.119843 containerd[1680]: time="2026-01-20T06:54:55.119815057Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27402164" Jan 20 06:54:55.121339 containerd[1680]: time="2026-01-20T06:54:55.121298267Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:55.124887 containerd[1680]: time="2026-01-20T06:54:55.124848641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:55.126547 containerd[1680]: time="2026-01-20T06:54:55.126399547Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.496255338s" Jan 20 06:54:55.126547 containerd[1680]: time="2026-01-20T06:54:55.126427289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 06:54:55.126962 containerd[1680]: time="2026-01-20T06:54:55.126937891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 06:54:55.742055 chronyd[1641]: Selected source PHC0 Jan 20 06:54:56.946714 systemd-resolved[1343]: Clock change detected. Flushing caches. 
Jan 20 06:54:55.742081 chronyd[1641]: System clock wrong by 1.204561 seconds Jan 20 06:54:56.946663 chronyd[1641]: System clock was stepped by 1.204561 seconds Jan 20 06:54:56.947921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 06:54:56.949988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:54:57.100954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:54:57.102843 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 20 06:54:57.102908 kernel: audit: type=1130 audit(1768892097.100:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:57.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:54:57.112286 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:54:57.468912 kubelet[2248]: E0120 06:54:57.468862 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:54:57.471095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:54:57.471219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:54:57.471765 systemd[1]: kubelet.service: Consumed 141ms CPU time, 110.3M memory peak. 
Jan 20 06:54:57.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:54:57.475856 kernel: audit: type=1131 audit(1768892097.470:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:54:58.243675 containerd[1680]: time="2026-01-20T06:54:58.243638806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:58.245681 containerd[1680]: time="2026-01-20T06:54:58.245655696Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 06:54:58.246917 containerd[1680]: time="2026-01-20T06:54:58.246880127Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:58.250845 containerd[1680]: time="2026-01-20T06:54:58.250286930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:58.250930 containerd[1680]: time="2026-01-20T06:54:58.250898180Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.919263512s" Jan 20 06:54:58.250960 containerd[1680]: 
time="2026-01-20T06:54:58.250935701Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 06:54:58.251379 containerd[1680]: time="2026-01-20T06:54:58.251361923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 06:54:59.543862 containerd[1680]: time="2026-01-20T06:54:59.543086192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:59.544520 containerd[1680]: time="2026-01-20T06:54:59.544342516Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 06:54:59.545948 containerd[1680]: time="2026-01-20T06:54:59.545922762Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:59.548912 containerd[1680]: time="2026-01-20T06:54:59.548883784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:54:59.549846 containerd[1680]: time="2026-01-20T06:54:59.549809249Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.298370201s" Jan 20 06:54:59.549894 containerd[1680]: time="2026-01-20T06:54:59.549848905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference 
\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 06:54:59.550523 containerd[1680]: time="2026-01-20T06:54:59.550505394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 06:55:00.536094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927169797.mount: Deactivated successfully. Jan 20 06:55:01.498007 containerd[1680]: time="2026-01-20T06:55:01.497514432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:01.498759 containerd[1680]: time="2026-01-20T06:55:01.498741203Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 20 06:55:01.500253 containerd[1680]: time="2026-01-20T06:55:01.500237377Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:01.502390 containerd[1680]: time="2026-01-20T06:55:01.502375974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:01.503008 containerd[1680]: time="2026-01-20T06:55:01.502729613Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.952180348s" Jan 20 06:55:01.503008 containerd[1680]: time="2026-01-20T06:55:01.502759911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 
06:55:01.503449 containerd[1680]: time="2026-01-20T06:55:01.503413743Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 06:55:02.154013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388635169.mount: Deactivated successfully. Jan 20 06:55:02.849987 containerd[1680]: time="2026-01-20T06:55:02.849945191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:02.851186 containerd[1680]: time="2026-01-20T06:55:02.851168263Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 20 06:55:02.852878 containerd[1680]: time="2026-01-20T06:55:02.852845308Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:02.856457 containerd[1680]: time="2026-01-20T06:55:02.856435801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:02.857224 containerd[1680]: time="2026-01-20T06:55:02.857188176Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.353674474s" Jan 20 06:55:02.857265 containerd[1680]: time="2026-01-20T06:55:02.857229766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 06:55:02.857680 containerd[1680]: 
time="2026-01-20T06:55:02.857643964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 06:55:03.430879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465637895.mount: Deactivated successfully. Jan 20 06:55:03.441850 containerd[1680]: time="2026-01-20T06:55:03.441756040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:55:03.444338 containerd[1680]: time="2026-01-20T06:55:03.444292779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 06:55:03.445691 containerd[1680]: time="2026-01-20T06:55:03.445653062Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:55:03.448762 containerd[1680]: time="2026-01-20T06:55:03.448715437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:55:03.449454 containerd[1680]: time="2026-01-20T06:55:03.449240770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.502337ms" Jan 20 06:55:03.449454 containerd[1680]: time="2026-01-20T06:55:03.449274191Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 06:55:03.449838 
containerd[1680]: time="2026-01-20T06:55:03.449809540Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 06:55:04.227885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319815980.mount: Deactivated successfully. Jan 20 06:55:06.716689 containerd[1680]: time="2026-01-20T06:55:06.715900807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:06.717695 containerd[1680]: time="2026-01-20T06:55:06.717650466Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 20 06:55:06.719367 containerd[1680]: time="2026-01-20T06:55:06.719333620Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:06.722821 containerd[1680]: time="2026-01-20T06:55:06.722776044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:06.723925 containerd[1680]: time="2026-01-20T06:55:06.723895366Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.273998941s" Jan 20 06:55:06.723985 containerd[1680]: time="2026-01-20T06:55:06.723927113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 06:55:07.653689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 20 06:55:07.655920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:55:08.668168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:55:08.671846 kernel: audit: type=1130 audit(1768892108.667:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:08.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:08.684447 (kubelet)[2408]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:55:08.732493 kubelet[2408]: E0120 06:55:08.732087 2408 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:55:08.735056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:55:08.735328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:55:08.735988 systemd[1]: kubelet.service: Consumed 159ms CPU time, 110.5M memory peak. Jan 20 06:55:08.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:55:08.739875 kernel: audit: type=1131 audit(1768892108.735:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 20 06:55:10.955067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:55:10.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:10.955438 systemd[1]: kubelet.service: Consumed 159ms CPU time, 110.5M memory peak. Jan 20 06:55:10.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:10.958784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:55:10.960924 kernel: audit: type=1130 audit(1768892110.954:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:10.960977 kernel: audit: type=1131 audit(1768892110.954:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:10.988742 systemd[1]: Reload requested from client PID 2425 ('systemctl') (unit session-10.scope)... Jan 20 06:55:10.988757 systemd[1]: Reloading... Jan 20 06:55:11.089851 zram_generator::config[2467]: No configuration found. Jan 20 06:55:11.291654 systemd[1]: Reloading finished in 302 ms. 
Jan 20 06:55:11.320068 kernel: audit: type=1334 audit(1768892111.311:296): prog-id=63 op=LOAD Jan 20 06:55:11.320169 kernel: audit: type=1334 audit(1768892111.311:297): prog-id=58 op=UNLOAD Jan 20 06:55:11.320189 kernel: audit: type=1334 audit(1768892111.312:298): prog-id=64 op=LOAD Jan 20 06:55:11.311000 audit: BPF prog-id=63 op=LOAD Jan 20 06:55:11.311000 audit: BPF prog-id=58 op=UNLOAD Jan 20 06:55:11.312000 audit: BPF prog-id=64 op=LOAD Jan 20 06:55:11.321381 kernel: audit: type=1334 audit(1768892111.312:299): prog-id=65 op=LOAD Jan 20 06:55:11.312000 audit: BPF prog-id=65 op=LOAD Jan 20 06:55:11.312000 audit: BPF prog-id=53 op=UNLOAD Jan 20 06:55:11.324387 kernel: audit: type=1334 audit(1768892111.312:300): prog-id=53 op=UNLOAD Jan 20 06:55:11.324428 kernel: audit: type=1334 audit(1768892111.312:301): prog-id=54 op=UNLOAD Jan 20 06:55:11.312000 audit: BPF prog-id=54 op=UNLOAD Jan 20 06:55:11.312000 audit: BPF prog-id=66 op=LOAD Jan 20 06:55:11.312000 audit: BPF prog-id=43 op=UNLOAD Jan 20 06:55:11.312000 audit: BPF prog-id=67 op=LOAD Jan 20 06:55:11.312000 audit: BPF prog-id=68 op=LOAD Jan 20 06:55:11.312000 audit: BPF prog-id=44 op=UNLOAD Jan 20 06:55:11.312000 audit: BPF prog-id=45 op=UNLOAD Jan 20 06:55:11.314000 audit: BPF prog-id=69 op=LOAD Jan 20 06:55:11.314000 audit: BPF prog-id=60 op=UNLOAD Jan 20 06:55:11.314000 audit: BPF prog-id=70 op=LOAD Jan 20 06:55:11.314000 audit: BPF prog-id=71 op=LOAD Jan 20 06:55:11.314000 audit: BPF prog-id=61 op=UNLOAD Jan 20 06:55:11.314000 audit: BPF prog-id=62 op=UNLOAD Jan 20 06:55:11.315000 audit: BPF prog-id=72 op=LOAD Jan 20 06:55:11.315000 audit: BPF prog-id=55 op=UNLOAD Jan 20 06:55:11.315000 audit: BPF prog-id=73 op=LOAD Jan 20 06:55:11.315000 audit: BPF prog-id=74 op=LOAD Jan 20 06:55:11.315000 audit: BPF prog-id=56 op=UNLOAD Jan 20 06:55:11.315000 audit: BPF prog-id=57 op=UNLOAD Jan 20 06:55:11.316000 audit: BPF prog-id=75 op=LOAD Jan 20 06:55:11.316000 audit: BPF prog-id=46 op=UNLOAD Jan 20 06:55:11.316000 
audit: BPF prog-id=76 op=LOAD Jan 20 06:55:11.316000 audit: BPF prog-id=77 op=LOAD Jan 20 06:55:11.316000 audit: BPF prog-id=47 op=UNLOAD Jan 20 06:55:11.316000 audit: BPF prog-id=48 op=UNLOAD Jan 20 06:55:11.317000 audit: BPF prog-id=78 op=LOAD Jan 20 06:55:11.317000 audit: BPF prog-id=59 op=UNLOAD Jan 20 06:55:11.318000 audit: BPF prog-id=79 op=LOAD Jan 20 06:55:11.318000 audit: BPF prog-id=50 op=UNLOAD Jan 20 06:55:11.318000 audit: BPF prog-id=80 op=LOAD Jan 20 06:55:11.318000 audit: BPF prog-id=81 op=LOAD Jan 20 06:55:11.318000 audit: BPF prog-id=51 op=UNLOAD Jan 20 06:55:11.318000 audit: BPF prog-id=52 op=UNLOAD Jan 20 06:55:11.318000 audit: BPF prog-id=82 op=LOAD Jan 20 06:55:11.318000 audit: BPF prog-id=49 op=UNLOAD Jan 20 06:55:11.335478 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 06:55:11.335561 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 06:55:11.335913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:55:11.335969 systemd[1]: kubelet.service: Consumed 100ms CPU time, 98.5M memory peak. Jan 20 06:55:11.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:55:11.337548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:55:12.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:12.106065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 06:55:12.118124 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:55:12.548263 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:55:12.548263 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:55:12.548263 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:55:12.548695 kubelet[2525]: I0120 06:55:12.548318 2525 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:55:13.289608 kubelet[2525]: I0120 06:55:13.289549 2525 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:55:13.289608 kubelet[2525]: I0120 06:55:13.289580 2525 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:55:13.289852 kubelet[2525]: I0120 06:55:13.289841 2525 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:55:13.999850 kubelet[2525]: E0120 06:55:13.999802 2525 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:55:14.002942 kubelet[2525]: I0120 06:55:14.002909 
2525 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:55:14.014781 kubelet[2525]: I0120 06:55:14.014733 2525 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:55:14.017841 kubelet[2525]: I0120 06:55:14.017782 2525 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 06:55:14.019585 kubelet[2525]: I0120 06:55:14.019146 2525 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:55:14.019585 kubelet[2525]: I0120 06:55:14.019188 2525 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4585-0-0-n-f719bce5cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":
"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:55:14.019585 kubelet[2525]: I0120 06:55:14.019398 2525 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:55:14.019585 kubelet[2525]: I0120 06:55:14.019408 2525 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:55:14.019807 kubelet[2525]: I0120 06:55:14.019530 2525 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:55:14.024485 kubelet[2525]: I0120 06:55:14.024454 2525 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:55:14.024630 kubelet[2525]: I0120 06:55:14.024621 2525 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:55:14.024686 kubelet[2525]: I0120 06:55:14.024681 2525 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:55:14.024734 kubelet[2525]: I0120 06:55:14.024728 2525 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:55:14.032341 kubelet[2525]: W0120 06:55:14.032004 2525 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4585-0-0-n-f719bce5cf&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 20 06:55:14.032341 kubelet[2525]: E0120 06:55:14.032056 2525 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4585-0-0-n-f719bce5cf&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:55:14.032478 kubelet[2525]: W0120 06:55:14.032354 2525 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 20 06:55:14.032478 kubelet[2525]: E0120 06:55:14.032382 2525 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:55:14.032768 kubelet[2525]: I0120 06:55:14.032747 2525 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:55:14.034846 kubelet[2525]: I0120 06:55:14.033101 2525 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:55:14.034846 kubelet[2525]: W0120 06:55:14.034017 2525 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 20 06:55:14.037653 kubelet[2525]: I0120 06:55:14.037618 2525 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:55:14.037653 kubelet[2525]: I0120 06:55:14.037656 2525 server.go:1287] "Started kubelet" Jan 20 06:55:14.038843 kubelet[2525]: I0120 06:55:14.037804 2525 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:55:14.038843 kubelet[2525]: I0120 06:55:14.038662 2525 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:55:14.040849 kubelet[2525]: I0120 06:55:14.040814 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:55:14.042000 audit[2536]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.044337 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 20 06:55:14.044383 kernel: audit: type=1325 audit(1768892114.042:338): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.044905 kubelet[2525]: I0120 06:55:14.044854 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:55:14.045116 kubelet[2525]: I0120 06:55:14.045107 2525 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:55:14.042000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea81185e0 a2=0 a3=0 items=0 ppid=2525 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.048793 kubelet[2525]: E0120 06:55:14.045300 2525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4585-0-0-n-f719bce5cf.188c5e017733f228 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4585-0-0-n-f719bce5cf,UID:ci-4585-0-0-n-f719bce5cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4585-0-0-n-f719bce5cf,},FirstTimestamp:2026-01-20 06:55:14.037633576 +0000 UTC m=+1.916094329,LastTimestamp:2026-01-20 06:55:14.037633576 +0000 UTC m=+1.916094329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4585-0-0-n-f719bce5cf,}" Jan 20 06:55:14.050976 kubelet[2525]: I0120 06:55:14.050958 2525 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:55:14.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 06:55:14.053508 kernel: audit: type=1300 audit(1768892114.042:338): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea81185e0 a2=0 a3=0 items=0 ppid=2525 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.053557 kernel: audit: type=1327 audit(1768892114.042:338): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 06:55:14.054402 kubelet[2525]: I0120 06:55:14.054384 2525 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:55:14.054732 kubelet[2525]: E0120 06:55:14.054717 2525 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" Jan 20 06:55:14.047000 audit[2537]: 
NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.055412 kubelet[2525]: I0120 06:55:14.055401 2525 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:55:14.055844 kernel: audit: type=1325 audit(1768892114.047:339): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.056006 kubelet[2525]: I0120 06:55:14.055998 2525 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:55:14.047000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca51e3fe0 a2=0 a3=0 items=0 ppid=2525 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.059068 kernel: audit: type=1300 audit(1768892114.047:339): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca51e3fe0 a2=0 a3=0 items=0 ppid=2525 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.047000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 06:55:14.061000 audit[2539]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.065063 kubelet[2525]: E0120 06:55:14.065044 2525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585-0-0-n-f719bce5cf?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Jan 20 06:55:14.065570 kernel: audit: type=1327 
audit(1768892114.047:339): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 06:55:14.065621 kernel: audit: type=1325 audit(1768892114.061:340): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.067128 kernel: audit: type=1300 audit(1768892114.061:340): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe493055b0 a2=0 a3=0 items=0 ppid=2525 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.061000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe493055b0 a2=0 a3=0 items=0 ppid=2525 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.067244 kubelet[2525]: W0120 06:55:14.066172 2525 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 20 06:55:14.067244 kubelet[2525]: E0120 06:55:14.066218 2525 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:55:14.070761 kubelet[2525]: I0120 06:55:14.070745 2525 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:55:14.070855 kubelet[2525]: I0120 06:55:14.070850 2525 factory.go:221] Registration of the systemd container factory 
successfully Jan 20 06:55:14.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:55:14.071022 kubelet[2525]: I0120 06:55:14.071010 2525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:55:14.072024 kernel: audit: type=1327 audit(1768892114.061:340): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:55:14.067000 audit[2541]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.074440 kernel: audit: type=1325 audit(1768892114.067:341): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.067000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff86ca8320 a2=0 a3=0 items=0 ppid=2525 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:55:14.081854 kubelet[2525]: E0120 06:55:14.080532 2525 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:55:14.086000 audit[2546]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.086000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffe79b5b70 a2=0 a3=0 items=0 ppid=2525 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 06:55:14.087728 kubelet[2525]: I0120 06:55:14.087567 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:55:14.089000 audit[2547]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:14.089000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff28db3b90 a2=0 a3=0 items=0 ppid=2525 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.089000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 06:55:14.091047 kubelet[2525]: I0120 06:55:14.091018 2525 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 06:55:14.091047 kubelet[2525]: I0120 06:55:14.091044 2525 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:55:14.091121 kubelet[2525]: I0120 06:55:14.091061 2525 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 06:55:14.091121 kubelet[2525]: I0120 06:55:14.091069 2525 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:55:14.091121 kubelet[2525]: E0120 06:55:14.091104 2525 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:55:14.094257 kubelet[2525]: I0120 06:55:14.094228 2525 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:55:14.094257 kubelet[2525]: I0120 06:55:14.094240 2525 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:55:14.094257 kubelet[2525]: I0120 06:55:14.094253 2525 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:55:14.094666 kubelet[2525]: W0120 06:55:14.094652 2525 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 20 06:55:14.094788 kubelet[2525]: E0120 06:55:14.094775 2525 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:55:14.094000 audit[2550]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.094000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7ffedf07cc60 a2=0 a3=0 items=0 ppid=2525 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 06:55:14.094000 audit[2551]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:14.094000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa0613020 a2=0 a3=0 items=0 ppid=2525 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.094000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 06:55:14.095000 audit[2553]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.095000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfe2018a0 a2=0 a3=0 items=0 ppid=2525 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 06:55:14.095000 audit[2554]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:14.095000 audit[2554]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffeb7a410c0 a2=0 a3=0 items=0 ppid=2525 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 06:55:14.096000 audit[2555]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:14.096000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff02b71a70 a2=0 a3=0 items=0 ppid=2525 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 06:55:14.097000 audit[2556]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:14.097000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5dfa6870 a2=0 a3=0 items=0 ppid=2525 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 06:55:14.101756 kubelet[2525]: I0120 06:55:14.101730 2525 policy_none.go:49] "None policy: Start" Jan 20 06:55:14.101756 kubelet[2525]: I0120 06:55:14.101757 2525 memory_manager.go:186] "Starting memorymanager" 
policy="None" Jan 20 06:55:14.101821 kubelet[2525]: I0120 06:55:14.101769 2525 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:55:14.107904 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 06:55:14.119798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 06:55:14.123468 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 06:55:14.130724 kubelet[2525]: I0120 06:55:14.130670 2525 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:55:14.130876 kubelet[2525]: I0120 06:55:14.130856 2525 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:55:14.130908 kubelet[2525]: I0120 06:55:14.130871 2525 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:55:14.131374 kubelet[2525]: I0120 06:55:14.131351 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:55:14.133794 kubelet[2525]: E0120 06:55:14.133739 2525 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 06:55:14.133794 kubelet[2525]: E0120 06:55:14.133780 2525 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4585-0-0-n-f719bce5cf\" not found" Jan 20 06:55:14.202025 systemd[1]: Created slice kubepods-burstable-pod6e4b701fb4a4e79f892a204f5b24f14a.slice - libcontainer container kubepods-burstable-pod6e4b701fb4a4e79f892a204f5b24f14a.slice. 
Jan 20 06:55:14.219326 kubelet[2525]: E0120 06:55:14.219257 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.222376 systemd[1]: Created slice kubepods-burstable-pod0ec33335f51937264bbd555aab57e315.slice - libcontainer container kubepods-burstable-pod0ec33335f51937264bbd555aab57e315.slice. Jan 20 06:55:14.223942 kubelet[2525]: E0120 06:55:14.223923 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.226079 systemd[1]: Created slice kubepods-burstable-pode48656fdbb62acc79d75ce1d010b7764.slice - libcontainer container kubepods-burstable-pode48656fdbb62acc79d75ce1d010b7764.slice. Jan 20 06:55:14.227382 kubelet[2525]: E0120 06:55:14.227365 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.233003 kubelet[2525]: I0120 06:55:14.232973 2525 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.233475 kubelet[2525]: E0120 06:55:14.233453 2525 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259056 kubelet[2525]: I0120 06:55:14.257868 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e4b701fb4a4e79f892a204f5b24f14a-k8s-certs\") pod \"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" (UID: \"6e4b701fb4a4e79f892a204f5b24f14a\") " pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259056 
kubelet[2525]: I0120 06:55:14.257906 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-flexvolume-dir\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259056 kubelet[2525]: I0120 06:55:14.257925 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259056 kubelet[2525]: I0120 06:55:14.257941 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e48656fdbb62acc79d75ce1d010b7764-kubeconfig\") pod \"kube-scheduler-ci-4585-0-0-n-f719bce5cf\" (UID: \"e48656fdbb62acc79d75ce1d010b7764\") " pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259056 kubelet[2525]: I0120 06:55:14.257959 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e4b701fb4a4e79f892a204f5b24f14a-ca-certs\") pod \"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" (UID: \"6e4b701fb4a4e79f892a204f5b24f14a\") " pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259237 kubelet[2525]: I0120 06:55:14.257974 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e4b701fb4a4e79f892a204f5b24f14a-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" (UID: \"6e4b701fb4a4e79f892a204f5b24f14a\") " pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259237 kubelet[2525]: I0120 06:55:14.257989 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-ca-certs\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259237 kubelet[2525]: I0120 06:55:14.258003 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-k8s-certs\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.259237 kubelet[2525]: I0120 06:55:14.258018 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-kubeconfig\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.266630 kubelet[2525]: E0120 06:55:14.266589 2525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585-0-0-n-f719bce5cf?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Jan 20 06:55:14.435765 kubelet[2525]: I0120 06:55:14.435743 2525 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.436251 kubelet[2525]: E0120 06:55:14.436230 2525 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.521112 containerd[1680]: time="2026-01-20T06:55:14.521014750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4585-0-0-n-f719bce5cf,Uid:6e4b701fb4a4e79f892a204f5b24f14a,Namespace:kube-system,Attempt:0,}" Jan 20 06:55:14.525724 containerd[1680]: time="2026-01-20T06:55:14.525556865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4585-0-0-n-f719bce5cf,Uid:0ec33335f51937264bbd555aab57e315,Namespace:kube-system,Attempt:0,}" Jan 20 06:55:14.528232 containerd[1680]: time="2026-01-20T06:55:14.528193301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4585-0-0-n-f719bce5cf,Uid:e48656fdbb62acc79d75ce1d010b7764,Namespace:kube-system,Attempt:0,}" Jan 20 06:55:14.570306 containerd[1680]: time="2026-01-20T06:55:14.570233000Z" level=info msg="connecting to shim 26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f" address="unix:///run/containerd/s/db15299e2733bfb2a47489e8a095def998605e35fb1c200033e42bec372752a5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:55:14.585864 containerd[1680]: time="2026-01-20T06:55:14.585294438Z" level=info msg="connecting to shim 86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4" address="unix:///run/containerd/s/2b77d8868af5f020fe4ef9eec8c7e3cc344c0deff8bbb4d51f8fcbb534e0f176" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:55:14.600978 containerd[1680]: time="2026-01-20T06:55:14.600940423Z" level=info msg="connecting to shim 1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e" address="unix:///run/containerd/s/9386420be80b30d3545efc1dfd467779ec41836332e34b253974c021ed2f13c1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:55:14.612018 systemd[1]: Started 
cri-containerd-26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f.scope - libcontainer container 26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f. Jan 20 06:55:14.629002 systemd[1]: Started cri-containerd-86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4.scope - libcontainer container 86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4. Jan 20 06:55:14.640000 audit: BPF prog-id=83 op=LOAD Jan 20 06:55:14.641000 audit: BPF prog-id=84 op=LOAD Jan 20 06:55:14.641000 audit[2616]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2585 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643839303363663438356336626561613663346436363532613566 Jan 20 06:55:14.641000 audit: BPF prog-id=85 op=LOAD Jan 20 06:55:14.641000 audit: BPF prog-id=84 op=UNLOAD Jan 20 06:55:14.641000 audit[2616]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643839303363663438356336626561613663346436363532613566 Jan 20 06:55:14.641000 audit: BPF prog-id=86 op=LOAD Jan 20 06:55:14.641000 audit[2616]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2585 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643839303363663438356336626561613663346436363532613566 Jan 20 06:55:14.642000 audit: BPF prog-id=87 op=LOAD Jan 20 06:55:14.642000 audit[2616]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2585 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643839303363663438356336626561613663346436363532613566 Jan 20 06:55:14.642000 audit: BPF prog-id=87 op=UNLOAD Jan 20 06:55:14.642000 audit[2616]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643839303363663438356336626561613663346436363532613566 Jan 20 06:55:14.643000 audit: BPF prog-id=86 op=UNLOAD Jan 20 06:55:14.643000 audit[2616]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643839303363663438356336626561613663346436363532613566 Jan 20 06:55:14.643000 audit: BPF prog-id=88 op=LOAD Jan 20 06:55:14.643000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613331633362326437343966393462616565353635336564616638 Jan 20 06:55:14.643000 audit: BPF prog-id=88 op=UNLOAD Jan 20 06:55:14.643000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613331633362326437343966393462616565353635336564616638 Jan 20 06:55:14.644000 audit: BPF prog-id=89 op=LOAD Jan 20 
06:55:14.643000 audit: BPF prog-id=90 op=LOAD Jan 20 06:55:14.644000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613331633362326437343966393462616565353635336564616638 Jan 20 06:55:14.643000 audit[2616]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2585 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836643839303363663438356336626561613663346436363532613566 Jan 20 06:55:14.644000 audit: BPF prog-id=91 op=LOAD Jan 20 06:55:14.644000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613331633362326437343966393462616565353635336564616638 Jan 20 
06:55:14.645000 audit: BPF prog-id=91 op=UNLOAD Jan 20 06:55:14.645000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613331633362326437343966393462616565353635336564616638 Jan 20 06:55:14.645000 audit: BPF prog-id=89 op=UNLOAD Jan 20 06:55:14.645000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613331633362326437343966393462616565353635336564616638 Jan 20 06:55:14.645000 audit: BPF prog-id=92 op=LOAD Jan 20 06:55:14.645000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.645000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613331633362326437343966393462616565353635336564616638 Jan 20 06:55:14.648054 systemd[1]: Started cri-containerd-1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e.scope - libcontainer container 1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e. Jan 20 06:55:14.667592 kubelet[2525]: E0120 06:55:14.667560 2525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585-0-0-n-f719bce5cf?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Jan 20 06:55:14.674000 audit: BPF prog-id=93 op=LOAD Jan 20 06:55:14.677000 audit: BPF prog-id=94 op=LOAD Jan 20 06:55:14.677000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000124238 a2=98 a3=0 items=0 ppid=2611 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166363366623439656162663131636431363338336261376330336437 Jan 20 06:55:14.677000 audit: BPF prog-id=94 op=UNLOAD Jan 20 06:55:14.677000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.677000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166363366623439656162663131636431363338336261376330336437 Jan 20 06:55:14.677000 audit: BPF prog-id=95 op=LOAD Jan 20 06:55:14.677000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000124488 a2=98 a3=0 items=0 ppid=2611 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166363366623439656162663131636431363338336261376330336437 Jan 20 06:55:14.677000 audit: BPF prog-id=96 op=LOAD Jan 20 06:55:14.677000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000124218 a2=98 a3=0 items=0 ppid=2611 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166363366623439656162663131636431363338336261376330336437 Jan 20 06:55:14.677000 audit: BPF prog-id=96 op=UNLOAD Jan 20 06:55:14.677000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 20 06:55:14.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166363366623439656162663131636431363338336261376330336437 Jan 20 06:55:14.677000 audit: BPF prog-id=95 op=UNLOAD Jan 20 06:55:14.677000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166363366623439656162663131636431363338336261376330336437 Jan 20 06:55:14.677000 audit: BPF prog-id=97 op=LOAD Jan 20 06:55:14.677000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001246e8 a2=98 a3=0 items=0 ppid=2611 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166363366623439656162663131636431363338336261376330336437 Jan 20 06:55:14.702032 containerd[1680]: time="2026-01-20T06:55:14.701898673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4585-0-0-n-f719bce5cf,Uid:0ec33335f51937264bbd555aab57e315,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4\"" Jan 20 06:55:14.705692 containerd[1680]: time="2026-01-20T06:55:14.705665424Z" level=info msg="CreateContainer within sandbox \"86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 06:55:14.714093 containerd[1680]: time="2026-01-20T06:55:14.714066175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4585-0-0-n-f719bce5cf,Uid:6e4b701fb4a4e79f892a204f5b24f14a,Namespace:kube-system,Attempt:0,} returns sandbox id \"26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f\"" Jan 20 06:55:14.716238 containerd[1680]: time="2026-01-20T06:55:14.716205236Z" level=info msg="CreateContainer within sandbox \"26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 06:55:14.718420 containerd[1680]: time="2026-01-20T06:55:14.718400338Z" level=info msg="Container e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:14.728850 containerd[1680]: time="2026-01-20T06:55:14.728452273Z" level=info msg="Container 08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:14.734647 containerd[1680]: time="2026-01-20T06:55:14.734587284Z" level=info msg="CreateContainer within sandbox \"86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79\"" Jan 20 06:55:14.735159 containerd[1680]: time="2026-01-20T06:55:14.735138654Z" level=info msg="StartContainer for \"e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79\"" Jan 20 06:55:14.736075 containerd[1680]: time="2026-01-20T06:55:14.736055790Z" level=info 
msg="connecting to shim e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79" address="unix:///run/containerd/s/2b77d8868af5f020fe4ef9eec8c7e3cc344c0deff8bbb4d51f8fcbb534e0f176" protocol=ttrpc version=3 Jan 20 06:55:14.739634 containerd[1680]: time="2026-01-20T06:55:14.739592745Z" level=info msg="CreateContainer within sandbox \"26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364\"" Jan 20 06:55:14.740426 containerd[1680]: time="2026-01-20T06:55:14.740331528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4585-0-0-n-f719bce5cf,Uid:e48656fdbb62acc79d75ce1d010b7764,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e\"" Jan 20 06:55:14.740684 containerd[1680]: time="2026-01-20T06:55:14.740628256Z" level=info msg="StartContainer for \"08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364\"" Jan 20 06:55:14.742094 containerd[1680]: time="2026-01-20T06:55:14.742051198Z" level=info msg="connecting to shim 08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364" address="unix:///run/containerd/s/db15299e2733bfb2a47489e8a095def998605e35fb1c200033e42bec372752a5" protocol=ttrpc version=3 Jan 20 06:55:14.742798 containerd[1680]: time="2026-01-20T06:55:14.742724842Z" level=info msg="CreateContainer within sandbox \"1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 06:55:14.755794 containerd[1680]: time="2026-01-20T06:55:14.755220868Z" level=info msg="Container b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:14.758188 systemd[1]: Started cri-containerd-e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79.scope - 
libcontainer container e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79. Jan 20 06:55:14.774653 containerd[1680]: time="2026-01-20T06:55:14.773508170Z" level=info msg="CreateContainer within sandbox \"1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75\"" Jan 20 06:55:14.774653 containerd[1680]: time="2026-01-20T06:55:14.774082705Z" level=info msg="StartContainer for \"b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75\"" Jan 20 06:55:14.775319 systemd[1]: Started cri-containerd-08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364.scope - libcontainer container 08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364. Jan 20 06:55:14.779013 containerd[1680]: time="2026-01-20T06:55:14.778987863Z" level=info msg="connecting to shim b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75" address="unix:///run/containerd/s/9386420be80b30d3545efc1dfd467779ec41836332e34b253974c021ed2f13c1" protocol=ttrpc version=3 Jan 20 06:55:14.784000 audit: BPF prog-id=98 op=LOAD Jan 20 06:55:14.784000 audit: BPF prog-id=99 op=LOAD Jan 20 06:55:14.784000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2585 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323063323133386636643533363430396435663436353965383765 Jan 20 06:55:14.784000 audit: BPF prog-id=99 op=UNLOAD Jan 20 06:55:14.784000 audit[2698]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323063323133386636643533363430396435663436353965383765 Jan 20 06:55:14.784000 audit: BPF prog-id=100 op=LOAD Jan 20 06:55:14.784000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2585 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323063323133386636643533363430396435663436353965383765 Jan 20 06:55:14.784000 audit: BPF prog-id=101 op=LOAD Jan 20 06:55:14.784000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2585 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323063323133386636643533363430396435663436353965383765 Jan 20 06:55:14.784000 audit: BPF prog-id=101 op=UNLOAD Jan 20 06:55:14.784000 
audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323063323133386636643533363430396435663436353965383765 Jan 20 06:55:14.784000 audit: BPF prog-id=100 op=UNLOAD Jan 20 06:55:14.784000 audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323063323133386636643533363430396435663436353965383765 Jan 20 06:55:14.784000 audit: BPF prog-id=102 op=LOAD Jan 20 06:55:14.784000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2585 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323063323133386636643533363430396435663436353965383765 Jan 20 06:55:14.801000 audit: BPF prog-id=103 
op=LOAD Jan 20 06:55:14.801000 audit: BPF prog-id=104 op=LOAD Jan 20 06:55:14.801000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2566 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038663861616265613766356138636131633566353130643761336539 Jan 20 06:55:14.802000 audit: BPF prog-id=104 op=UNLOAD Jan 20 06:55:14.802000 audit[2704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2566 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038663861616265613766356138636131633566353130643761336539 Jan 20 06:55:14.802000 audit: BPF prog-id=105 op=LOAD Jan 20 06:55:14.802000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2566 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.802000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038663861616265613766356138636131633566353130643761336539 Jan 20 06:55:14.803000 audit: BPF prog-id=106 op=LOAD Jan 20 06:55:14.803000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2566 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038663861616265613766356138636131633566353130643761336539 Jan 20 06:55:14.803000 audit: BPF prog-id=106 op=UNLOAD Jan 20 06:55:14.803000 audit[2704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2566 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038663861616265613766356138636131633566353130643761336539 Jan 20 06:55:14.803000 audit: BPF prog-id=105 op=UNLOAD Jan 20 06:55:14.803000 audit[2704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2566 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:55:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038663861616265613766356138636131633566353130643761336539 Jan 20 06:55:14.803000 audit: BPF prog-id=107 op=LOAD Jan 20 06:55:14.803000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2566 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038663861616265613766356138636131633566353130643761336539 Jan 20 06:55:14.808158 systemd[1]: Started cri-containerd-b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75.scope - libcontainer container b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75. 
Jan 20 06:55:14.832000 audit: BPF prog-id=108 op=LOAD Jan 20 06:55:14.833000 audit: BPF prog-id=109 op=LOAD Jan 20 06:55:14.833000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2611 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653930616461643333306139336264303831303461353434323064 Jan 20 06:55:14.834000 audit: BPF prog-id=109 op=UNLOAD Jan 20 06:55:14.834000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653930616461643333306139336264303831303461353434323064 Jan 20 06:55:14.834000 audit: BPF prog-id=110 op=LOAD Jan 20 06:55:14.834000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2611 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.834000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653930616461643333306139336264303831303461353434323064 Jan 20 06:55:14.834000 audit: BPF prog-id=111 op=LOAD Jan 20 06:55:14.834000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2611 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653930616461643333306139336264303831303461353434323064 Jan 20 06:55:14.834000 audit: BPF prog-id=111 op=UNLOAD Jan 20 06:55:14.834000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653930616461643333306139336264303831303461353434323064 Jan 20 06:55:14.835000 audit: BPF prog-id=110 op=UNLOAD Jan 20 06:55:14.835000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:55:14.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653930616461643333306139336264303831303461353434323064 Jan 20 06:55:14.835000 audit: BPF prog-id=112 op=LOAD Jan 20 06:55:14.835000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2611 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:14.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653930616461643333306139336264303831303461353434323064 Jan 20 06:55:14.843065 kubelet[2525]: I0120 06:55:14.841276 2525 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.843065 kubelet[2525]: E0120 06:55:14.843037 2525 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:14.844555 containerd[1680]: time="2026-01-20T06:55:14.844529962Z" level=info msg="StartContainer for \"e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79\" returns successfully" Jan 20 06:55:14.886453 containerd[1680]: time="2026-01-20T06:55:14.886235120Z" level=info msg="StartContainer for \"08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364\" returns successfully" Jan 20 06:55:14.893793 containerd[1680]: time="2026-01-20T06:55:14.893756967Z" level=info msg="StartContainer for 
\"b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75\" returns successfully" Jan 20 06:55:15.101453 kubelet[2525]: E0120 06:55:15.101351 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:15.107071 kubelet[2525]: E0120 06:55:15.107049 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:15.107968 kubelet[2525]: E0120 06:55:15.107869 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:15.645973 kubelet[2525]: I0120 06:55:15.644973 2525 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.109211 kubelet[2525]: E0120 06:55:16.109095 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.109841 kubelet[2525]: E0120 06:55:16.109713 2525 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.833154 kubelet[2525]: E0120 06:55:16.833095 2525 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4585-0-0-n-f719bce5cf\" not found" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.893173 kubelet[2525]: I0120 06:55:16.893138 2525 kubelet_node_status.go:78] "Successfully registered node" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.955633 kubelet[2525]: I0120 06:55:16.955595 2525 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.965812 kubelet[2525]: E0120 06:55:16.965595 2525 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.965812 kubelet[2525]: I0120 06:55:16.965624 2525 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.969207 kubelet[2525]: E0120 06:55:16.969158 2525 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.969207 kubelet[2525]: I0120 06:55:16.969182 2525 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:16.972849 kubelet[2525]: E0120 06:55:16.972610 2525 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4585-0-0-n-f719bce5cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:17.034277 kubelet[2525]: I0120 06:55:17.034233 2525 apiserver.go:52] "Watching apiserver" Jan 20 06:55:17.056353 kubelet[2525]: I0120 06:55:17.056311 2525 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:55:17.153191 kubelet[2525]: I0120 06:55:17.153169 2525 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:17.155381 kubelet[2525]: E0120 06:55:17.155347 2525 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4585-0-0-n-f719bce5cf\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:18.176208 update_engine[1661]: I20260120 06:55:18.176048 1661 update_attempter.cc:509] Updating boot flags... Jan 20 06:55:18.938775 systemd[1]: Reload requested from client PID 2809 ('systemctl') (unit session-10.scope)... Jan 20 06:55:18.938794 systemd[1]: Reloading... Jan 20 06:55:19.034900 zram_generator::config[2851]: No configuration found. Jan 20 06:55:19.256468 systemd[1]: Reloading finished in 317 ms. Jan 20 06:55:19.289607 kubelet[2525]: I0120 06:55:19.289555 2525 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:55:19.289591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:55:19.301175 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 06:55:19.301450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:55:19.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:19.302147 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 20 06:55:19.302186 kernel: audit: type=1131 audit(1768892119.300:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:19.305550 systemd[1]: kubelet.service: Consumed 1.035s CPU time, 132.4M memory peak. Jan 20 06:55:19.308089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 06:55:19.308000 audit: BPF prog-id=113 op=LOAD Jan 20 06:55:19.313031 kernel: audit: type=1334 audit(1768892119.308:399): prog-id=113 op=LOAD Jan 20 06:55:19.313090 kernel: audit: type=1334 audit(1768892119.308:400): prog-id=72 op=UNLOAD Jan 20 06:55:19.308000 audit: BPF prog-id=72 op=UNLOAD Jan 20 06:55:19.314954 kernel: audit: type=1334 audit(1768892119.308:401): prog-id=114 op=LOAD Jan 20 06:55:19.308000 audit: BPF prog-id=114 op=LOAD Jan 20 06:55:19.317863 kernel: audit: type=1334 audit(1768892119.308:402): prog-id=115 op=LOAD Jan 20 06:55:19.308000 audit: BPF prog-id=115 op=LOAD Jan 20 06:55:19.308000 audit: BPF prog-id=73 op=UNLOAD Jan 20 06:55:19.308000 audit: BPF prog-id=74 op=UNLOAD Jan 20 06:55:19.320315 kernel: audit: type=1334 audit(1768892119.308:403): prog-id=73 op=UNLOAD Jan 20 06:55:19.320348 kernel: audit: type=1334 audit(1768892119.308:404): prog-id=74 op=UNLOAD Jan 20 06:55:19.320370 kernel: audit: type=1334 audit(1768892119.309:405): prog-id=116 op=LOAD Jan 20 06:55:19.309000 audit: BPF prog-id=116 op=LOAD Jan 20 06:55:19.309000 audit: BPF prog-id=82 op=UNLOAD Jan 20 06:55:19.321896 kernel: audit: type=1334 audit(1768892119.309:406): prog-id=82 op=UNLOAD Jan 20 06:55:19.309000 audit: BPF prog-id=117 op=LOAD Jan 20 06:55:19.323337 kernel: audit: type=1334 audit(1768892119.309:407): prog-id=117 op=LOAD Jan 20 06:55:19.309000 audit: BPF prog-id=78 op=UNLOAD Jan 20 06:55:19.310000 audit: BPF prog-id=118 op=LOAD Jan 20 06:55:19.310000 audit: BPF prog-id=79 op=UNLOAD Jan 20 06:55:19.310000 audit: BPF prog-id=119 op=LOAD Jan 20 06:55:19.310000 audit: BPF prog-id=120 op=LOAD Jan 20 06:55:19.310000 audit: BPF prog-id=80 op=UNLOAD Jan 20 06:55:19.310000 audit: BPF prog-id=81 op=UNLOAD Jan 20 06:55:19.311000 audit: BPF prog-id=121 op=LOAD Jan 20 06:55:19.311000 audit: BPF prog-id=75 op=UNLOAD Jan 20 06:55:19.311000 audit: BPF prog-id=122 op=LOAD Jan 20 06:55:19.311000 audit: BPF prog-id=123 op=LOAD Jan 20 06:55:19.311000 audit: BPF prog-id=76 
op=UNLOAD Jan 20 06:55:19.311000 audit: BPF prog-id=77 op=UNLOAD Jan 20 06:55:19.312000 audit: BPF prog-id=124 op=LOAD Jan 20 06:55:19.312000 audit: BPF prog-id=69 op=UNLOAD Jan 20 06:55:19.312000 audit: BPF prog-id=125 op=LOAD Jan 20 06:55:19.312000 audit: BPF prog-id=126 op=LOAD Jan 20 06:55:19.312000 audit: BPF prog-id=70 op=UNLOAD Jan 20 06:55:19.312000 audit: BPF prog-id=71 op=UNLOAD Jan 20 06:55:19.312000 audit: BPF prog-id=127 op=LOAD Jan 20 06:55:19.312000 audit: BPF prog-id=128 op=LOAD Jan 20 06:55:19.312000 audit: BPF prog-id=64 op=UNLOAD Jan 20 06:55:19.312000 audit: BPF prog-id=65 op=UNLOAD Jan 20 06:55:19.313000 audit: BPF prog-id=129 op=LOAD Jan 20 06:55:19.313000 audit: BPF prog-id=63 op=UNLOAD Jan 20 06:55:19.314000 audit: BPF prog-id=130 op=LOAD Jan 20 06:55:19.314000 audit: BPF prog-id=66 op=UNLOAD Jan 20 06:55:19.314000 audit: BPF prog-id=131 op=LOAD Jan 20 06:55:19.314000 audit: BPF prog-id=132 op=LOAD Jan 20 06:55:19.314000 audit: BPF prog-id=67 op=UNLOAD Jan 20 06:55:19.314000 audit: BPF prog-id=68 op=UNLOAD Jan 20 06:55:21.070225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:55:21.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:21.083209 (kubelet)[2906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:55:21.138137 kubelet[2906]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:55:21.138137 kubelet[2906]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 20 06:55:21.138137 kubelet[2906]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:55:21.138137 kubelet[2906]: I0120 06:55:21.137595 2906 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:55:21.152695 kubelet[2906]: I0120 06:55:21.152655 2906 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:55:21.152892 kubelet[2906]: I0120 06:55:21.152885 2906 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:55:21.153236 kubelet[2906]: I0120 06:55:21.153226 2906 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:55:21.154497 kubelet[2906]: I0120 06:55:21.154478 2906 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 06:55:21.158550 kubelet[2906]: I0120 06:55:21.158513 2906 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:55:21.163931 kubelet[2906]: I0120 06:55:21.162296 2906 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:55:21.165416 kubelet[2906]: I0120 06:55:21.165399 2906 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 06:55:21.165702 kubelet[2906]: I0120 06:55:21.165674 2906 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:55:21.165954 kubelet[2906]: I0120 06:55:21.165758 2906 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4585-0-0-n-f719bce5cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:55:21.166071 kubelet[2906]: I0120 06:55:21.166063 2906 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 20 06:55:21.166144 kubelet[2906]: I0120 06:55:21.166138 2906 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:55:21.166220 kubelet[2906]: I0120 06:55:21.166214 2906 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:55:21.166457 kubelet[2906]: I0120 06:55:21.166447 2906 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:55:21.166519 kubelet[2906]: I0120 06:55:21.166514 2906 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:55:21.166568 kubelet[2906]: I0120 06:55:21.166563 2906 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:55:21.166608 kubelet[2906]: I0120 06:55:21.166603 2906 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:55:21.168944 kubelet[2906]: I0120 06:55:21.168927 2906 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:55:21.169303 kubelet[2906]: I0120 06:55:21.169293 2906 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:55:21.170092 kubelet[2906]: I0120 06:55:21.170074 2906 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:55:21.170138 kubelet[2906]: I0120 06:55:21.170118 2906 server.go:1287] "Started kubelet" Jan 20 06:55:21.175269 kubelet[2906]: I0120 06:55:21.175244 2906 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:55:21.182593 kubelet[2906]: I0120 06:55:21.182167 2906 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:55:21.183164 kubelet[2906]: I0120 06:55:21.183147 2906 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:55:21.186341 kubelet[2906]: I0120 06:55:21.186285 2906 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:55:21.186503 kubelet[2906]: I0120 06:55:21.186493 2906 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:55:21.186761 kubelet[2906]: I0120 06:55:21.186747 2906 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:55:21.188431 kubelet[2906]: E0120 06:55:21.188413 2906 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:55:21.193388 kubelet[2906]: I0120 06:55:21.193355 2906 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:55:21.193844 kubelet[2906]: I0120 06:55:21.193833 2906 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:55:21.194296 kubelet[2906]: I0120 06:55:21.194287 2906 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:55:21.197586 kubelet[2906]: I0120 06:55:21.197565 2906 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:55:21.197912 kubelet[2906]: I0120 06:55:21.197866 2906 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:55:21.198226 kubelet[2906]: I0120 06:55:21.198200 2906 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:55:21.207602 kubelet[2906]: I0120 06:55:21.207531 2906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:55:21.208631 kubelet[2906]: I0120 06:55:21.208603 2906 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 06:55:21.208713 kubelet[2906]: I0120 06:55:21.208637 2906 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:55:21.208713 kubelet[2906]: I0120 06:55:21.208658 2906 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 06:55:21.208713 kubelet[2906]: I0120 06:55:21.208665 2906 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:55:21.208840 kubelet[2906]: E0120 06:55:21.208711 2906 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:55:21.280148 kubelet[2906]: I0120 06:55:21.280053 2906 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:55:21.280674 kubelet[2906]: I0120 06:55:21.280287 2906 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:55:21.280674 kubelet[2906]: I0120 06:55:21.280405 2906 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:55:21.281122 kubelet[2906]: I0120 06:55:21.281066 2906 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 06:55:21.281122 kubelet[2906]: I0120 06:55:21.281081 2906 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 06:55:21.281122 kubelet[2906]: I0120 06:55:21.281101 2906 policy_none.go:49] "None policy: Start" Jan 20 06:55:21.281305 kubelet[2906]: I0120 06:55:21.281210 2906 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:55:21.281305 kubelet[2906]: I0120 06:55:21.281223 2906 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:55:21.281507 kubelet[2906]: I0120 06:55:21.281457 2906 state_mem.go:75] "Updated machine memory state" Jan 20 06:55:21.290398 kubelet[2906]: I0120 06:55:21.290266 2906 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:55:21.292000 kubelet[2906]: I0120 
06:55:21.291983 2906 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:55:21.292123 kubelet[2906]: I0120 06:55:21.292093 2906 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:55:21.293311 kubelet[2906]: I0120 06:55:21.292361 2906 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:55:21.294924 kubelet[2906]: E0120 06:55:21.293648 2906 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 06:55:21.310652 kubelet[2906]: I0120 06:55:21.310613 2906 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.312088 kubelet[2906]: I0120 06:55:21.312070 2906 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.312794 kubelet[2906]: I0120 06:55:21.312782 2906 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.396556 kubelet[2906]: I0120 06:55:21.396195 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-ca-certs\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.396556 kubelet[2906]: I0120 06:55:21.396477 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-k8s-certs\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " 
pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.397027 kubelet[2906]: I0120 06:55:21.396849 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e48656fdbb62acc79d75ce1d010b7764-kubeconfig\") pod \"kube-scheduler-ci-4585-0-0-n-f719bce5cf\" (UID: \"e48656fdbb62acc79d75ce1d010b7764\") " pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.397027 kubelet[2906]: I0120 06:55:21.396878 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e4b701fb4a4e79f892a204f5b24f14a-k8s-certs\") pod \"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" (UID: \"6e4b701fb4a4e79f892a204f5b24f14a\") " pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.397027 kubelet[2906]: I0120 06:55:21.396894 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-flexvolume-dir\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.397027 kubelet[2906]: I0120 06:55:21.396931 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-kubeconfig\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.397027 kubelet[2906]: I0120 06:55:21.396949 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0ec33335f51937264bbd555aab57e315-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4585-0-0-n-f719bce5cf\" (UID: \"0ec33335f51937264bbd555aab57e315\") " pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.397179 kubelet[2906]: I0120 06:55:21.396964 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e4b701fb4a4e79f892a204f5b24f14a-ca-certs\") pod \"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" (UID: \"6e4b701fb4a4e79f892a204f5b24f14a\") " pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.397179 kubelet[2906]: I0120 06:55:21.397002 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e4b701fb4a4e79f892a204f5b24f14a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" (UID: \"6e4b701fb4a4e79f892a204f5b24f14a\") " pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.408641 kubelet[2906]: I0120 06:55:21.407890 2906 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.416473 kubelet[2906]: I0120 06:55:21.416326 2906 kubelet_node_status.go:124] "Node was previously registered" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:21.416473 kubelet[2906]: I0120 06:55:21.416423 2906 kubelet_node_status.go:78] "Successfully registered node" node="ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:22.179708 kubelet[2906]: I0120 06:55:22.178190 2906 apiserver.go:52] "Watching apiserver" Jan 20 06:55:22.194980 kubelet[2906]: I0120 06:55:22.194920 2906 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:55:22.225224 kubelet[2906]: I0120 06:55:22.225135 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4585-0-0-n-f719bce5cf" podStartSLOduration=1.225117455 podStartE2EDuration="1.225117455s" podCreationTimestamp="2026-01-20 06:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:55:22.224391941 +0000 UTC m=+1.136029718" watchObservedRunningTime="2026-01-20 06:55:22.225117455 +0000 UTC m=+1.136755228" Jan 20 06:55:22.254170 kubelet[2906]: I0120 06:55:22.254142 2906 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:22.263721 kubelet[2906]: I0120 06:55:22.263657 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" podStartSLOduration=1.26363398 podStartE2EDuration="1.26363398s" podCreationTimestamp="2026-01-20 06:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:55:22.245226545 +0000 UTC m=+1.156864326" watchObservedRunningTime="2026-01-20 06:55:22.26363398 +0000 UTC m=+1.175271745" Jan 20 06:55:22.264859 kubelet[2906]: E0120 06:55:22.264812 2906 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585-0-0-n-f719bce5cf\" already exists" pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" Jan 20 06:55:22.279547 kubelet[2906]: I0120 06:55:22.279472 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4585-0-0-n-f719bce5cf" podStartSLOduration=1.279452893 podStartE2EDuration="1.279452893s" podCreationTimestamp="2026-01-20 06:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:55:22.263945513 +0000 UTC m=+1.175583294" watchObservedRunningTime="2026-01-20 06:55:22.279452893 +0000 UTC 
m=+1.191090675" Jan 20 06:55:25.856019 kubelet[2906]: I0120 06:55:25.855980 2906 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 06:55:25.856633 containerd[1680]: time="2026-01-20T06:55:25.856503611Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 06:55:25.856929 kubelet[2906]: I0120 06:55:25.856680 2906 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 06:55:26.912701 systemd[1]: Created slice kubepods-besteffort-pod46ac5655_220a_416c_9d67_a9dee538d76c.slice - libcontainer container kubepods-besteffort-pod46ac5655_220a_416c_9d67_a9dee538d76c.slice. Jan 20 06:55:26.935399 kubelet[2906]: I0120 06:55:26.935357 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46ac5655-220a-416c-9d67-a9dee538d76c-kube-proxy\") pod \"kube-proxy-2698k\" (UID: \"46ac5655-220a-416c-9d67-a9dee538d76c\") " pod="kube-system/kube-proxy-2698k" Jan 20 06:55:26.936699 kubelet[2906]: I0120 06:55:26.936683 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46ac5655-220a-416c-9d67-a9dee538d76c-lib-modules\") pod \"kube-proxy-2698k\" (UID: \"46ac5655-220a-416c-9d67-a9dee538d76c\") " pod="kube-system/kube-proxy-2698k" Jan 20 06:55:26.936840 kubelet[2906]: I0120 06:55:26.936788 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqzs4\" (UniqueName: \"kubernetes.io/projected/46ac5655-220a-416c-9d67-a9dee538d76c-kube-api-access-bqzs4\") pod \"kube-proxy-2698k\" (UID: \"46ac5655-220a-416c-9d67-a9dee538d76c\") " pod="kube-system/kube-proxy-2698k" Jan 20 06:55:26.936840 kubelet[2906]: I0120 06:55:26.936813 2906 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46ac5655-220a-416c-9d67-a9dee538d76c-xtables-lock\") pod \"kube-proxy-2698k\" (UID: \"46ac5655-220a-416c-9d67-a9dee538d76c\") " pod="kube-system/kube-proxy-2698k" Jan 20 06:55:26.983945 systemd[1]: Created slice kubepods-besteffort-podc6825465_f405_4175_9378_73f98628e5ce.slice - libcontainer container kubepods-besteffort-podc6825465_f405_4175_9378_73f98628e5ce.slice. Jan 20 06:55:27.038067 kubelet[2906]: I0120 06:55:27.037566 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6825465-f405-4175-9378-73f98628e5ce-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xdm6q\" (UID: \"c6825465-f405-4175-9378-73f98628e5ce\") " pod="tigera-operator/tigera-operator-7dcd859c48-xdm6q" Jan 20 06:55:27.038067 kubelet[2906]: I0120 06:55:27.037599 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2t99\" (UniqueName: \"kubernetes.io/projected/c6825465-f405-4175-9378-73f98628e5ce-kube-api-access-c2t99\") pod \"tigera-operator-7dcd859c48-xdm6q\" (UID: \"c6825465-f405-4175-9378-73f98628e5ce\") " pod="tigera-operator/tigera-operator-7dcd859c48-xdm6q" Jan 20 06:55:27.221793 containerd[1680]: time="2026-01-20T06:55:27.221703009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2698k,Uid:46ac5655-220a-416c-9d67-a9dee538d76c,Namespace:kube-system,Attempt:0,}" Jan 20 06:55:27.255469 containerd[1680]: time="2026-01-20T06:55:27.255155791Z" level=info msg="connecting to shim bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea" address="unix:///run/containerd/s/03ccf8124c8673daaef60ac4e127f935cf2a5536860dd1e22faf29a3317c86e3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:55:27.283051 systemd[1]: Started 
cri-containerd-bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea.scope - libcontainer container bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea. Jan 20 06:55:27.288481 containerd[1680]: time="2026-01-20T06:55:27.288271691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xdm6q,Uid:c6825465-f405-4175-9378-73f98628e5ce,Namespace:tigera-operator,Attempt:0,}" Jan 20 06:55:27.297923 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 06:55:27.298011 kernel: audit: type=1334 audit(1768892127.293:440): prog-id=133 op=LOAD Jan 20 06:55:27.293000 audit: BPF prog-id=133 op=LOAD Jan 20 06:55:27.293000 audit: BPF prog-id=134 op=LOAD Jan 20 06:55:27.293000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.302929 kernel: audit: type=1334 audit(1768892127.293:441): prog-id=134 op=LOAD Jan 20 06:55:27.302965 kernel: audit: type=1300 audit(1768892127.293:441): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.293000 audit: BPF prog-id=134 op=UNLOAD Jan 20 06:55:27.312315 kernel: audit: type=1327 audit(1768892127.293:441): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.312377 kernel: audit: type=1334 audit(1768892127.293:442): prog-id=134 op=UNLOAD Jan 20 06:55:27.293000 audit[2973]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.314170 kernel: audit: type=1300 audit(1768892127.293:442): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.293000 audit: BPF prog-id=135 op=LOAD Jan 20 06:55:27.325206 kernel: audit: type=1327 audit(1768892127.293:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.325253 kernel: audit: type=1334 audit(1768892127.293:443): prog-id=135 op=LOAD Jan 20 06:55:27.293000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.327524 kernel: audit: type=1300 audit(1768892127.293:443): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.331793 kernel: audit: type=1327 audit(1768892127.293:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.334190 containerd[1680]: time="2026-01-20T06:55:27.334142181Z" level=info msg="connecting to shim 27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6" address="unix:///run/containerd/s/edf7739ddb6af22edc05d8812341e154e0c0be971e12c54bf4dcb548a0559dae" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:55:27.293000 audit: BPF prog-id=136 op=LOAD Jan 20 06:55:27.293000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.293000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.293000 audit: BPF prog-id=136 op=UNLOAD Jan 20 06:55:27.293000 audit[2973]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.293000 audit: BPF prog-id=135 op=UNLOAD Jan 20 06:55:27.293000 audit[2973]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.295000 audit: BPF prog-id=137 op=LOAD Jan 20 06:55:27.295000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2961 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:55:27.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264396130613033666139373031386164626163666430623061313765 Jan 20 06:55:27.338724 containerd[1680]: time="2026-01-20T06:55:27.338428269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2698k,Uid:46ac5655-220a-416c-9d67-a9dee538d76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea\"" Jan 20 06:55:27.341651 containerd[1680]: time="2026-01-20T06:55:27.341610802Z" level=info msg="CreateContainer within sandbox \"bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 06:55:27.354371 containerd[1680]: time="2026-01-20T06:55:27.354341818Z" level=info msg="Container e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:27.357024 systemd[1]: Started cri-containerd-27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6.scope - libcontainer container 27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6. 
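The `PROCTITLE` fields in the audit records above are the audited process's argv, hex-encoded with NUL separators and capped by the kernel at 128 bytes. Decoding the value repeated in the records above (copied verbatim from the log) recovers the `runc` invocation containerd made for the `bd9a0a03…` sandbox; note the final log-path argument is cut off by the 128-byte proctitle capture limit, not by this sketch.

```python
# AUDIT_PROCTITLE value copied from the audit records above,
# split across lines only for readability.
proctitle = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
    "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
    "696F2F6264396130613033666139373031386164626163666430623061313765"
)
raw = bytes.fromhex(proctitle)
# argv entries are NUL-separated; the last one is truncated at the
# kernel's 128-byte proctitle limit.
argv = [a.decode() for a in raw.split(b"\x00")]
print(argv)
# → ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#    '/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd9a0a03fa97018adbacfd0b0a17e']
```

The decoded `--root` and `--log` paths match the containerd shim socket and sandbox id seen elsewhere in this log, which is a quick way to tie an audit record back to a specific container.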
Jan 20 06:55:27.364864 containerd[1680]: time="2026-01-20T06:55:27.364452164Z" level=info msg="CreateContainer within sandbox \"bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7\"" Jan 20 06:55:27.366857 containerd[1680]: time="2026-01-20T06:55:27.366025677Z" level=info msg="StartContainer for \"e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7\"" Jan 20 06:55:27.368185 containerd[1680]: time="2026-01-20T06:55:27.368163511Z" level=info msg="connecting to shim e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7" address="unix:///run/containerd/s/03ccf8124c8673daaef60ac4e127f935cf2a5536860dd1e22faf29a3317c86e3" protocol=ttrpc version=3 Jan 20 06:55:27.371000 audit: BPF prog-id=138 op=LOAD Jan 20 06:55:27.373000 audit: BPF prog-id=139 op=LOAD Jan 20 06:55:27.373000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333339663362386537386131653833393961313665393735303232 Jan 20 06:55:27.373000 audit: BPF prog-id=139 op=UNLOAD Jan 20 06:55:27.373000 audit[3019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.373000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333339663362386537386131653833393961313665393735303232 Jan 20 06:55:27.373000 audit: BPF prog-id=140 op=LOAD Jan 20 06:55:27.373000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333339663362386537386131653833393961313665393735303232 Jan 20 06:55:27.373000 audit: BPF prog-id=141 op=LOAD Jan 20 06:55:27.373000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333339663362386537386131653833393961313665393735303232 Jan 20 06:55:27.373000 audit: BPF prog-id=141 op=UNLOAD Jan 20 06:55:27.373000 audit[3019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:55:27.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333339663362386537386131653833393961313665393735303232 Jan 20 06:55:27.373000 audit: BPF prog-id=140 op=UNLOAD Jan 20 06:55:27.373000 audit[3019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333339663362386537386131653833393961313665393735303232 Jan 20 06:55:27.373000 audit: BPF prog-id=142 op=LOAD Jan 20 06:55:27.373000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333339663362386537386131653833393961313665393735303232 Jan 20 06:55:27.391005 systemd[1]: Started cri-containerd-e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7.scope - libcontainer container e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7. 
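The `BPF prog-id=N op=LOAD` / `op=UNLOAD` records interleaved above come in pairs as `runc` sets up each container (plausibly its cgroup device-filter and related programs, though the audit record itself does not say which). A minimal tally over sample records copied from this log shows which program IDs remain loaded at the end of the excerpt; this is a sketch for reading the pattern, not a containerd tool.

```python
import re

# Sample BPF audit records copied verbatim (timestamps stripped)
# from the 06:55:27.293 burst above.
records = [
    "audit: BPF prog-id=133 op=LOAD",
    "audit: BPF prog-id=134 op=LOAD",
    "audit: BPF prog-id=134 op=UNLOAD",
    "audit: BPF prog-id=135 op=LOAD",
    "audit: BPF prog-id=136 op=LOAD",
    "audit: BPF prog-id=136 op=UNLOAD",
    "audit: BPF prog-id=135 op=UNLOAD",
]
loaded = set()
for rec in records:
    m = re.search(r"prog-id=(\d+) op=(LOAD|UNLOAD)", rec)
    if m:
        pid, op = int(m.group(1)), m.group(2)
        # LOAD adds the id; UNLOAD (emitted when the last fd to the
        # program is closed, cf. the syscall=3/close records) removes it.
        loaded.add(pid) if op == "LOAD" else loaded.discard(pid)
print(sorted(loaded))  # → [133]
```

The accompanying `SYSCALL` records bear this out: on x86_64, `syscall=321` is `bpf(2)` (the LOAD side) and `syscall=3` is `close(2)`, whose return triggers the matching UNLOAD record.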
Jan 20 06:55:27.425004 containerd[1680]: time="2026-01-20T06:55:27.424901968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xdm6q,Uid:c6825465-f405-4175-9378-73f98628e5ce,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6\"" Jan 20 06:55:27.423000 audit: BPF prog-id=143 op=LOAD Jan 20 06:55:27.423000 audit[3039]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2961 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539666137636536323836303861663638643132373236663239363636 Jan 20 06:55:27.423000 audit: BPF prog-id=144 op=LOAD Jan 20 06:55:27.423000 audit[3039]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2961 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539666137636536323836303861663638643132373236663239363636 Jan 20 06:55:27.423000 audit: BPF prog-id=144 op=UNLOAD Jan 20 06:55:27.423000 audit[3039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539666137636536323836303861663638643132373236663239363636 Jan 20 06:55:27.424000 audit: BPF prog-id=143 op=UNLOAD Jan 20 06:55:27.424000 audit[3039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539666137636536323836303861663638643132373236663239363636 Jan 20 06:55:27.424000 audit: BPF prog-id=145 op=LOAD Jan 20 06:55:27.424000 audit[3039]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2961 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539666137636536323836303861663638643132373236663239363636 Jan 20 06:55:27.427853 containerd[1680]: time="2026-01-20T06:55:27.427805464Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 06:55:27.451207 containerd[1680]: time="2026-01-20T06:55:27.451177296Z" level=info msg="StartContainer for 
\"e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7\" returns successfully" Jan 20 06:55:27.553000 audit[3106]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.553000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec0e2cb60 a2=0 a3=7ffec0e2cb4c items=0 ppid=3052 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 06:55:27.554000 audit[3107]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.554000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc47a3540 a2=0 a3=7fffc47a352c items=0 ppid=3052 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 06:55:27.555000 audit[3108]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.555000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4deb38c0 a2=0 a3=7ffd4deb38ac items=0 ppid=3052 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.555000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 06:55:27.556000 audit[3109]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.556000 audit[3110]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.556000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3b4eb6d0 a2=0 a3=7ffe3b4eb6bc items=0 ppid=3052 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 06:55:27.556000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6a595740 a2=0 a3=7ffe6a59572c items=0 ppid=3052 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.556000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 06:55:27.559000 audit[3112]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.559000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc597a5680 a2=0 a3=7ffc597a566c items=0 ppid=3052 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:55:27.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 06:55:27.659000 audit[3113]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.659000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdf3acab40 a2=0 a3=7ffdf3acab2c items=0 ppid=3052 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 06:55:27.662000 audit[3115]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.662000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc5109a040 a2=0 a3=7ffc5109a02c items=0 ppid=3052 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 06:55:27.666000 audit[3118]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.666000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe7e1837a0 a2=0 a3=7ffe7e18378c items=0 
ppid=3052 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 06:55:27.667000 audit[3119]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.667000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef67de600 a2=0 a3=7ffef67de5ec items=0 ppid=3052 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 06:55:27.670000 audit[3121]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.670000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc994b05e0 a2=0 a3=7ffc994b05cc items=0 ppid=3052 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.670000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 06:55:27.671000 audit[3122]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.671000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3315b800 a2=0 a3=7ffc3315b7ec items=0 ppid=3052 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 06:55:27.674000 audit[3124]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.674000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb2dd8f00 a2=0 a3=7ffcb2dd8eec items=0 ppid=3052 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.674000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 06:55:27.677000 audit[3127]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.677000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffd3557b2f0 a2=0 a3=7ffd3557b2dc items=0 ppid=3052 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 06:55:27.678000 audit[3128]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.678000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe73f82f10 a2=0 a3=7ffe73f82efc items=0 ppid=3052 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 06:55:27.681000 audit[3130]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.681000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd77b9f450 a2=0 a3=7ffd77b9f43c items=0 ppid=3052 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.681000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 06:55:27.682000 audit[3131]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.682000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8938cda0 a2=0 a3=7ffd8938cd8c items=0 ppid=3052 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 06:55:27.684000 audit[3133]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.684000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8438ef30 a2=0 a3=7ffe8438ef1c items=0 ppid=3052 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:55:27.688000 audit[3136]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.688000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffc6621af40 a2=0 a3=7ffc6621af2c items=0 ppid=3052 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:55:27.692000 audit[3139]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.692000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9f61e500 a2=0 a3=7fff9f61e4ec items=0 ppid=3052 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 06:55:27.693000 audit[3140]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.693000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef01668c0 a2=0 a3=7ffef01668ac items=0 ppid=3052 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.693000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 06:55:27.695000 audit[3142]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.695000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd12e5d2b0 a2=0 a3=7ffd12e5d29c items=0 ppid=3052 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:55:27.699000 audit[3145]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.699000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa6acd6d0 a2=0 a3=7fffa6acd6bc items=0 ppid=3052 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:55:27.700000 audit[3146]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.700000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd257f6830 a2=0 a3=7ffd257f681c items=0 ppid=3052 pid=3146 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.700000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 06:55:27.702000 audit[3148]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:55:27.702000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffd6a8a2e0 a2=0 a3=7fffd6a8a2cc items=0 ppid=3052 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 06:55:27.728000 audit[3154]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:27.728000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd127e5090 a2=0 a3=7ffd127e507c items=0 ppid=3052 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.728000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:27.735000 audit[3154]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 06:55:27.735000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd127e5090 a2=0 a3=7ffd127e507c items=0 ppid=3052 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:27.737000 audit[3159]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.737000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffed3d5f0f0 a2=0 a3=7ffed3d5f0dc items=0 ppid=3052 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.737000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 06:55:27.740000 audit[3161]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.740000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff92603d50 a2=0 a3=7fff92603d3c items=0 ppid=3052 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.740000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 06:55:27.744000 audit[3164]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.744000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc8fdc2b80 a2=0 a3=7ffc8fdc2b6c items=0 ppid=3052 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 06:55:27.745000 audit[3165]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.745000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb30c0430 a2=0 a3=7ffdb30c041c items=0 ppid=3052 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 06:55:27.748000 audit[3167]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.748000 audit[3167]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7fff87d2e5b0 a2=0 a3=7fff87d2e59c items=0 ppid=3052 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 06:55:27.749000 audit[3168]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.749000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde4e9b430 a2=0 a3=7ffde4e9b41c items=0 ppid=3052 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.749000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 06:55:27.752000 audit[3170]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.752000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc744c7b90 a2=0 a3=7ffc744c7b7c items=0 ppid=3052 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.752000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 06:55:27.755000 audit[3173]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.755000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcaa280030 a2=0 a3=7ffcaa28001c items=0 ppid=3052 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 06:55:27.756000 audit[3174]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.756000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe8eda3d0 a2=0 a3=7fffe8eda3bc items=0 ppid=3052 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 06:55:27.759000 audit[3176]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.759000 audit[3176]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffcf3e37cd0 a2=0 a3=7ffcf3e37cbc items=0 ppid=3052 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 06:55:27.760000 audit[3177]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.760000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc01cceaa0 a2=0 a3=7ffc01ccea8c items=0 ppid=3052 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 06:55:27.762000 audit[3179]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.762000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9e9c9e30 a2=0 a3=7ffd9e9c9e1c items=0 ppid=3052 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.762000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:55:27.766000 audit[3182]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.766000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8c37e470 a2=0 a3=7fff8c37e45c items=0 ppid=3052 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 06:55:27.770000 audit[3185]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.770000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd741f300 a2=0 a3=7fffd741f2ec items=0 ppid=3052 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 06:55:27.771000 audit[3186]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.771000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6abed420 a2=0 a3=7ffc6abed40c items=0 ppid=3052 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 06:55:27.773000 audit[3188]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.773000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd297c3640 a2=0 a3=7ffd297c362c items=0 ppid=3052 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:55:27.778000 audit[3191]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.778000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5a499f60 a2=0 a3=7ffd5a499f4c items=0 ppid=3052 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.778000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:55:27.779000 audit[3192]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.779000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5ff01010 a2=0 a3=7fff5ff00ffc items=0 ppid=3052 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 06:55:27.783000 audit[3194]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.783000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffed9e3bbe0 a2=0 a3=7ffed9e3bbcc items=0 ppid=3052 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 06:55:27.784000 audit[3195]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.784000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee0555d90 a2=0 
a3=7ffee0555d7c items=0 ppid=3052 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 06:55:27.787000 audit[3197]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.787000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf9de1140 a2=0 a3=7ffcf9de112c items=0 ppid=3052 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:55:27.791000 audit[3200]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:55:27.791000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc97f4ac70 a2=0 a3=7ffc97f4ac5c items=0 ppid=3052 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:55:27.795000 audit[3202]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 06:55:27.795000 audit[3202]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe14565ff0 a2=0 a3=7ffe14565fdc items=0 ppid=3052 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.795000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:27.796000 audit[3202]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 06:55:27.796000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe14565ff0 a2=0 a3=7ffe14565fdc items=0 ppid=3052 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:27.796000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:29.184871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023472359.mount: Deactivated successfully. 
Jan 20 06:55:29.443338 kubelet[2906]: I0120 06:55:29.442746 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2698k" podStartSLOduration=3.442730205 podStartE2EDuration="3.442730205s" podCreationTimestamp="2026-01-20 06:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:55:28.281665522 +0000 UTC m=+7.193303357" watchObservedRunningTime="2026-01-20 06:55:29.442730205 +0000 UTC m=+8.354367986" Jan 20 06:55:29.798846 containerd[1680]: time="2026-01-20T06:55:29.798647471Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:29.800064 containerd[1680]: time="2026-01-20T06:55:29.799999196Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 20 06:55:29.801796 containerd[1680]: time="2026-01-20T06:55:29.801734798Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:29.804338 containerd[1680]: time="2026-01-20T06:55:29.804312392Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:29.804883 containerd[1680]: time="2026-01-20T06:55:29.804862430Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.37665574s" Jan 20 06:55:29.804919 containerd[1680]: time="2026-01-20T06:55:29.804887900Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 06:55:29.809224 containerd[1680]: time="2026-01-20T06:55:29.808979549Z" level=info msg="CreateContainer within sandbox \"27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 06:55:29.822898 containerd[1680]: time="2026-01-20T06:55:29.822870764Z" level=info msg="Container d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:29.829877 containerd[1680]: time="2026-01-20T06:55:29.829847620Z" level=info msg="CreateContainer within sandbox \"27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b\"" Jan 20 06:55:29.830996 containerd[1680]: time="2026-01-20T06:55:29.830980647Z" level=info msg="StartContainer for \"d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b\"" Jan 20 06:55:29.831900 containerd[1680]: time="2026-01-20T06:55:29.831881256Z" level=info msg="connecting to shim d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b" address="unix:///run/containerd/s/edf7739ddb6af22edc05d8812341e154e0c0be971e12c54bf4dcb548a0559dae" protocol=ttrpc version=3 Jan 20 06:55:29.854051 systemd[1]: Started cri-containerd-d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b.scope - libcontainer container d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b. 
Jan 20 06:55:29.862000 audit: BPF prog-id=146 op=LOAD Jan 20 06:55:29.863000 audit: BPF prog-id=147 op=LOAD Jan 20 06:55:29.863000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3007 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:29.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653232396266366564333466333631373630656235653735316661 Jan 20 06:55:29.863000 audit: BPF prog-id=147 op=UNLOAD Jan 20 06:55:29.863000 audit[3211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:29.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653232396266366564333466333631373630656235653735316661 Jan 20 06:55:29.863000 audit: BPF prog-id=148 op=LOAD Jan 20 06:55:29.863000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3007 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:29.863000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653232396266366564333466333631373630656235653735316661 Jan 20 06:55:29.863000 audit: BPF prog-id=149 op=LOAD Jan 20 06:55:29.863000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3007 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:29.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653232396266366564333466333631373630656235653735316661 Jan 20 06:55:29.863000 audit: BPF prog-id=149 op=UNLOAD Jan 20 06:55:29.863000 audit[3211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:29.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653232396266366564333466333631373630656235653735316661 Jan 20 06:55:29.863000 audit: BPF prog-id=148 op=UNLOAD Jan 20 06:55:29.863000 audit[3211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:55:29.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653232396266366564333466333631373630656235653735316661 Jan 20 06:55:29.864000 audit: BPF prog-id=150 op=LOAD Jan 20 06:55:29.864000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3007 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:29.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653232396266366564333466333631373630656235653735316661 Jan 20 06:55:29.883734 containerd[1680]: time="2026-01-20T06:55:29.883702058Z" level=info msg="StartContainer for \"d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b\" returns successfully" Jan 20 06:55:30.985028 kubelet[2906]: I0120 06:55:30.984623 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xdm6q" podStartSLOduration=2.604551711 podStartE2EDuration="4.984605623s" podCreationTimestamp="2026-01-20 06:55:26 +0000 UTC" firstStartedPulling="2026-01-20 06:55:27.426754491 +0000 UTC m=+6.338392250" lastFinishedPulling="2026-01-20 06:55:29.806808398 +0000 UTC m=+8.718446162" observedRunningTime="2026-01-20 06:55:30.293498015 +0000 UTC m=+9.205135798" watchObservedRunningTime="2026-01-20 06:55:30.984605623 +0000 UTC m=+9.896243406" Jan 20 06:55:35.510081 sudo[1954]: pam_unix(sudo:session): session closed for user root Jan 20 06:55:35.508000 audit[1954]: USER_END pid=1954 uid=500 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:55:35.511904 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 06:55:35.511955 kernel: audit: type=1106 audit(1768892135.508:520): pid=1954 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:55:35.508000 audit[1954]: CRED_DISP pid=1954 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:55:35.519861 kernel: audit: type=1104 audit(1768892135.508:521): pid=1954 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:55:35.609446 sshd[1953]: Connection closed by 20.161.92.111 port 34602 Jan 20 06:55:35.610448 sshd-session[1949]: pam_unix(sshd:session): session closed for user core Jan 20 06:55:35.611000 audit[1949]: USER_END pid=1949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:55:35.618448 systemd[1]: sshd@8-10.0.0.92:22-20.161.92.111:34602.service: Deactivated successfully. 
Jan 20 06:55:35.618852 kernel: audit: type=1106 audit(1768892135.611:522): pid=1949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:55:35.620683 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 06:55:35.620889 systemd[1]: session-10.scope: Consumed 4.946s CPU time, 230M memory peak. Jan 20 06:55:35.622542 systemd-logind[1660]: Session 10 logged out. Waiting for processes to exit. Jan 20 06:55:35.613000 audit[1949]: CRED_DISP pid=1949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:55:35.626848 kernel: audit: type=1104 audit(1768892135.613:523): pid=1949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:55:35.627055 kernel: audit: type=1131 audit(1768892135.616:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-20.161.92.111:34602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:35.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-20.161.92.111:34602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:55:35.633015 systemd-logind[1660]: Removed session 10. 
Jan 20 06:55:36.207000 audit[3292]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:36.212862 kernel: audit: type=1325 audit(1768892136.207:525): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:36.212949 kernel: audit: type=1300 audit(1768892136.207:525): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcba33c790 a2=0 a3=7ffcba33c77c items=0 ppid=3052 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:36.207000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcba33c790 a2=0 a3=7ffcba33c77c items=0 ppid=3052 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:36.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:36.221847 kernel: audit: type=1327 audit(1768892136.207:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:36.218000 audit[3292]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:36.225848 kernel: audit: type=1325 audit(1768892136.218:526): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:36.225890 kernel: audit: type=1300 audit(1768892136.218:526): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcba33c790 a2=0 a3=0 
items=0 ppid=3052 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:36.218000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcba33c790 a2=0 a3=0 items=0 ppid=3052 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:36.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:36.247000 audit[3294]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:36.247000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd72b6a640 a2=0 a3=7ffd72b6a62c items=0 ppid=3052 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:36.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:36.252000 audit[3294]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:36.252000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd72b6a640 a2=0 a3=0 items=0 ppid=3052 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:36.252000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:38.171000 audit[3296]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:38.171000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff78cd8080 a2=0 a3=7fff78cd806c items=0 ppid=3052 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:38.171000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:38.176000 audit[3296]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:38.176000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff78cd8080 a2=0 a3=0 items=0 ppid=3052 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:38.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:38.187000 audit[3298]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:38.187000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd6579c180 a2=0 a3=7ffd6579c16c items=0 ppid=3052 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:38.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:38.192000 audit[3298]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:38.192000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6579c180 a2=0 a3=0 items=0 ppid=3052 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:38.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:39.206000 audit[3300]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:39.206000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffed2b10110 a2=0 a3=7ffed2b100fc items=0 ppid=3052 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:39.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:39.210000 audit[3300]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:39.210000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed2b10110 a2=0 a3=0 items=0 ppid=3052 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:39.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:39.880670 systemd[1]: Created slice kubepods-besteffort-podb615a421_49d1_45f1_b503_d54e65d79724.slice - libcontainer container kubepods-besteffort-podb615a421_49d1_45f1_b503_d54e65d79724.slice. Jan 20 06:55:39.917311 kubelet[2906]: I0120 06:55:39.917207 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b615a421-49d1-45f1-b503-d54e65d79724-tigera-ca-bundle\") pod \"calico-typha-6f44d4f6b9-9j7rz\" (UID: \"b615a421-49d1-45f1-b503-d54e65d79724\") " pod="calico-system/calico-typha-6f44d4f6b9-9j7rz" Jan 20 06:55:39.917311 kubelet[2906]: I0120 06:55:39.917242 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b615a421-49d1-45f1-b503-d54e65d79724-typha-certs\") pod \"calico-typha-6f44d4f6b9-9j7rz\" (UID: \"b615a421-49d1-45f1-b503-d54e65d79724\") " pod="calico-system/calico-typha-6f44d4f6b9-9j7rz" Jan 20 06:55:39.917311 kubelet[2906]: I0120 06:55:39.917260 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvrk\" (UniqueName: \"kubernetes.io/projected/b615a421-49d1-45f1-b503-d54e65d79724-kube-api-access-xwvrk\") pod \"calico-typha-6f44d4f6b9-9j7rz\" (UID: \"b615a421-49d1-45f1-b503-d54e65d79724\") " pod="calico-system/calico-typha-6f44d4f6b9-9j7rz" Jan 20 06:55:40.074550 systemd[1]: Created slice kubepods-besteffort-podcc988606_a447_472d_a665_8dd8c9776c19.slice - libcontainer container kubepods-besteffort-podcc988606_a447_472d_a665_8dd8c9776c19.slice. 
Jan 20 06:55:40.119198 kubelet[2906]: I0120 06:55:40.119149 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-cni-net-dir\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119450 kubelet[2906]: I0120 06:55:40.119373 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-var-run-calico\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119450 kubelet[2906]: I0120 06:55:40.119397 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-var-lib-calico\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119450 kubelet[2906]: I0120 06:55:40.119414 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47m4\" (UniqueName: \"kubernetes.io/projected/cc988606-a447-472d-a665-8dd8c9776c19-kube-api-access-t47m4\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119623 kubelet[2906]: I0120 06:55:40.119475 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-cni-log-dir\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119623 kubelet[2906]: I0120 06:55:40.119502 
2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-policysync\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119623 kubelet[2906]: I0120 06:55:40.119528 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-cni-bin-dir\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119623 kubelet[2906]: I0120 06:55:40.119560 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cc988606-a447-472d-a665-8dd8c9776c19-node-certs\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119623 kubelet[2906]: I0120 06:55:40.119580 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc988606-a447-472d-a665-8dd8c9776c19-tigera-ca-bundle\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119763 kubelet[2906]: I0120 06:55:40.119607 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-xtables-lock\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119763 kubelet[2906]: I0120 06:55:40.119628 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-lib-modules\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.119763 kubelet[2906]: I0120 06:55:40.119644 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cc988606-a447-472d-a665-8dd8c9776c19-flexvol-driver-host\") pod \"calico-node-8fx7p\" (UID: \"cc988606-a447-472d-a665-8dd8c9776c19\") " pod="calico-system/calico-node-8fx7p" Jan 20 06:55:40.184558 containerd[1680]: time="2026-01-20T06:55:40.184473637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f44d4f6b9-9j7rz,Uid:b615a421-49d1-45f1-b503-d54e65d79724,Namespace:calico-system,Attempt:0,}" Jan 20 06:55:40.222690 containerd[1680]: time="2026-01-20T06:55:40.222641405Z" level=info msg="connecting to shim 8bfc1f73fe6674d0dfd18401a9a63e340c1f9d855ade552a88aead8dbcee5d29" address="unix:///run/containerd/s/ba4958c908169e9c956071c26903f914b965aeb0e760fdda40f06f2e3bee9feb" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:55:40.238141 kubelet[2906]: E0120 06:55:40.238102 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.238653 kubelet[2906]: W0120 06:55:40.238376 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.238653 kubelet[2906]: E0120 06:55:40.238421 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.240000 audit[3324]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:40.244185 kubelet[2906]: E0120 06:55:40.244168 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.244321 kubelet[2906]: W0120 06:55:40.244270 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.244321 kubelet[2906]: E0120 06:55:40.244292 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.240000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea54ce7e0 a2=0 a3=7ffea54ce7cc items=0 ppid=3052 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:40.250000 audit[3324]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:40.250000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea54ce7e0 a2=0 a3=0 items=0 ppid=3052 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.250000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:40.274318 kubelet[2906]: E0120 06:55:40.274174 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:40.275215 systemd[1]: Started cri-containerd-8bfc1f73fe6674d0dfd18401a9a63e340c1f9d855ade552a88aead8dbcee5d29.scope - libcontainer container 8bfc1f73fe6674d0dfd18401a9a63e340c1f9d855ade552a88aead8dbcee5d29. Jan 20 06:55:40.303455 kubelet[2906]: E0120 06:55:40.303359 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.303455 kubelet[2906]: W0120 06:55:40.303377 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.303455 kubelet[2906]: E0120 06:55:40.303394 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.303690 kubelet[2906]: E0120 06:55:40.303682 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.303784 kubelet[2906]: W0120 06:55:40.303724 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.303784 kubelet[2906]: E0120 06:55:40.303734 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.302000 audit: BPF prog-id=151 op=LOAD Jan 20 06:55:40.304079 kubelet[2906]: E0120 06:55:40.303969 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.304503 kubelet[2906]: W0120 06:55:40.304115 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.304503 kubelet[2906]: E0120 06:55:40.304127 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.303000 audit: BPF prog-id=152 op=LOAD Jan 20 06:55:40.303000 audit[3329]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3313 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666331663733666536363734643064666431383430316139613633 Jan 20 06:55:40.303000 audit: BPF prog-id=152 op=UNLOAD Jan 20 06:55:40.303000 audit[3329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3313 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666331663733666536363734643064666431383430316139613633 Jan 20 06:55:40.303000 audit: BPF prog-id=153 op=LOAD Jan 20 06:55:40.303000 audit[3329]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3313 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.303000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666331663733666536363734643064666431383430316139613633 Jan 20 06:55:40.303000 audit: BPF prog-id=154 op=LOAD Jan 20 06:55:40.303000 audit[3329]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3313 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666331663733666536363734643064666431383430316139613633 Jan 20 06:55:40.303000 audit: BPF prog-id=154 op=UNLOAD Jan 20 06:55:40.303000 audit[3329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3313 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666331663733666536363734643064666431383430316139613633 Jan 20 06:55:40.303000 audit: BPF prog-id=153 op=UNLOAD Jan 20 06:55:40.303000 audit[3329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3313 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:55:40.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666331663733666536363734643064666431383430316139613633 Jan 20 06:55:40.303000 audit: BPF prog-id=155 op=LOAD Jan 20 06:55:40.303000 audit[3329]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3313 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666331663733666536363734643064666431383430316139613633 Jan 20 06:55:40.306556 kubelet[2906]: E0120 06:55:40.304907 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.306556 kubelet[2906]: W0120 06:55:40.304928 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.306556 kubelet[2906]: E0120 06:55:40.304938 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.306556 kubelet[2906]: E0120 06:55:40.305714 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.306556 kubelet[2906]: W0120 06:55:40.305722 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.306556 kubelet[2906]: E0120 06:55:40.305732 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.306556 kubelet[2906]: E0120 06:55:40.306235 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.306556 kubelet[2906]: W0120 06:55:40.306243 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.306556 kubelet[2906]: E0120 06:55:40.306251 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.307179 kubelet[2906]: E0120 06:55:40.306859 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.307179 kubelet[2906]: W0120 06:55:40.306867 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.307179 kubelet[2906]: E0120 06:55:40.306874 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.307179 kubelet[2906]: E0120 06:55:40.307017 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.307179 kubelet[2906]: W0120 06:55:40.307022 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.307179 kubelet[2906]: E0120 06:55:40.307029 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.307408 kubelet[2906]: E0120 06:55:40.307361 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.307408 kubelet[2906]: W0120 06:55:40.307371 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.307408 kubelet[2906]: E0120 06:55:40.307398 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.307637 kubelet[2906]: E0120 06:55:40.307626 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.307637 kubelet[2906]: W0120 06:55:40.307634 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.307718 kubelet[2906]: E0120 06:55:40.307642 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.307887 kubelet[2906]: E0120 06:55:40.307878 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.307887 kubelet[2906]: W0120 06:55:40.307886 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.307954 kubelet[2906]: E0120 06:55:40.307893 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.308118 kubelet[2906]: E0120 06:55:40.308108 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.308118 kubelet[2906]: W0120 06:55:40.308117 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.308170 kubelet[2906]: E0120 06:55:40.308123 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.309014 kubelet[2906]: E0120 06:55:40.309002 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.309041 kubelet[2906]: W0120 06:55:40.309013 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.309041 kubelet[2906]: E0120 06:55:40.309022 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.309162 kubelet[2906]: E0120 06:55:40.309152 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.309192 kubelet[2906]: W0120 06:55:40.309172 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.309192 kubelet[2906]: E0120 06:55:40.309179 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.309300 kubelet[2906]: E0120 06:55:40.309293 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.309300 kubelet[2906]: W0120 06:55:40.309300 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.309348 kubelet[2906]: E0120 06:55:40.309306 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.309438 kubelet[2906]: E0120 06:55:40.309431 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.309438 kubelet[2906]: W0120 06:55:40.309438 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.309480 kubelet[2906]: E0120 06:55:40.309444 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.309588 kubelet[2906]: E0120 06:55:40.309580 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.309588 kubelet[2906]: W0120 06:55:40.309587 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.309638 kubelet[2906]: E0120 06:55:40.309593 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.309735 kubelet[2906]: E0120 06:55:40.309727 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.309735 kubelet[2906]: W0120 06:55:40.309735 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.309777 kubelet[2906]: E0120 06:55:40.309740 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.309938 kubelet[2906]: E0120 06:55:40.309930 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.309938 kubelet[2906]: W0120 06:55:40.309938 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.309984 kubelet[2906]: E0120 06:55:40.309944 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.310083 kubelet[2906]: E0120 06:55:40.310075 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.310083 kubelet[2906]: W0120 06:55:40.310083 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.310125 kubelet[2906]: E0120 06:55:40.310089 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.324195 kubelet[2906]: E0120 06:55:40.323238 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.324195 kubelet[2906]: W0120 06:55:40.323257 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.324195 kubelet[2906]: E0120 06:55:40.323276 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.324483 kubelet[2906]: I0120 06:55:40.324467 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/506bd27e-3197-4d34-a858-e04017d318df-registration-dir\") pod \"csi-node-driver-j8w7k\" (UID: \"506bd27e-3197-4d34-a858-e04017d318df\") " pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:55:40.325013 kubelet[2906]: E0120 06:55:40.324954 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.325447 kubelet[2906]: W0120 06:55:40.325433 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.325698 kubelet[2906]: E0120 06:55:40.325687 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.326018 kubelet[2906]: E0120 06:55:40.325976 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.326157 kubelet[2906]: W0120 06:55:40.326146 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.326297 kubelet[2906]: E0120 06:55:40.326288 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.326602 kubelet[2906]: I0120 06:55:40.326514 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/506bd27e-3197-4d34-a858-e04017d318df-socket-dir\") pod \"csi-node-driver-j8w7k\" (UID: \"506bd27e-3197-4d34-a858-e04017d318df\") " pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:55:40.326887 kubelet[2906]: E0120 06:55:40.326860 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.327072 kubelet[2906]: W0120 06:55:40.326986 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.327327 kubelet[2906]: E0120 06:55:40.327133 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.328035 kubelet[2906]: E0120 06:55:40.327931 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.328239 kubelet[2906]: W0120 06:55:40.328131 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.328345 kubelet[2906]: E0120 06:55:40.328298 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.329720 kubelet[2906]: E0120 06:55:40.329706 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.329841 kubelet[2906]: W0120 06:55:40.329785 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.329841 kubelet[2906]: E0120 06:55:40.329806 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.329954 kubelet[2906]: I0120 06:55:40.329939 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vggc\" (UniqueName: \"kubernetes.io/projected/506bd27e-3197-4d34-a858-e04017d318df-kube-api-access-7vggc\") pod \"csi-node-driver-j8w7k\" (UID: \"506bd27e-3197-4d34-a858-e04017d318df\") " pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:55:40.330136 kubelet[2906]: E0120 06:55:40.330129 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.330200 kubelet[2906]: W0120 06:55:40.330174 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.330200 kubelet[2906]: E0120 06:55:40.330185 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.330458 kubelet[2906]: E0120 06:55:40.330440 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.330458 kubelet[2906]: W0120 06:55:40.330448 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.330600 kubelet[2906]: E0120 06:55:40.330524 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.330675 kubelet[2906]: E0120 06:55:40.330670 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.330753 kubelet[2906]: W0120 06:55:40.330707 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.330753 kubelet[2906]: E0120 06:55:40.330724 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.331047 kubelet[2906]: E0120 06:55:40.331039 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.331167 kubelet[2906]: W0120 06:55:40.331086 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.331167 kubelet[2906]: E0120 06:55:40.331096 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.331167 kubelet[2906]: I0120 06:55:40.331122 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/506bd27e-3197-4d34-a858-e04017d318df-varrun\") pod \"csi-node-driver-j8w7k\" (UID: \"506bd27e-3197-4d34-a858-e04017d318df\") " pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:55:40.331362 kubelet[2906]: E0120 06:55:40.331346 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.331362 kubelet[2906]: W0120 06:55:40.331354 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.331475 kubelet[2906]: E0120 06:55:40.331420 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.331475 kubelet[2906]: I0120 06:55:40.331456 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/506bd27e-3197-4d34-a858-e04017d318df-kubelet-dir\") pod \"csi-node-driver-j8w7k\" (UID: \"506bd27e-3197-4d34-a858-e04017d318df\") " pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:55:40.331738 kubelet[2906]: E0120 06:55:40.331718 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.331738 kubelet[2906]: W0120 06:55:40.331729 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.331880 kubelet[2906]: E0120 06:55:40.331767 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.331973 kubelet[2906]: E0120 06:55:40.331967 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.331973 kubelet[2906]: W0120 06:55:40.331987 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.332175 kubelet[2906]: E0120 06:55:40.332147 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.332522 kubelet[2906]: E0120 06:55:40.332512 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.332607 kubelet[2906]: W0120 06:55:40.332567 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.332607 kubelet[2906]: E0120 06:55:40.332578 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.333301 kubelet[2906]: E0120 06:55:40.333263 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.333301 kubelet[2906]: W0120 06:55:40.333272 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.333301 kubelet[2906]: E0120 06:55:40.333281 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.356249 containerd[1680]: time="2026-01-20T06:55:40.356135898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f44d4f6b9-9j7rz,Uid:b615a421-49d1-45f1-b503-d54e65d79724,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bfc1f73fe6674d0dfd18401a9a63e340c1f9d855ade552a88aead8dbcee5d29\"" Jan 20 06:55:40.359656 containerd[1680]: time="2026-01-20T06:55:40.359629525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 06:55:40.380015 containerd[1680]: time="2026-01-20T06:55:40.379971651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fx7p,Uid:cc988606-a447-472d-a665-8dd8c9776c19,Namespace:calico-system,Attempt:0,}" Jan 20 06:55:40.411874 containerd[1680]: time="2026-01-20T06:55:40.411180613Z" level=info msg="connecting to shim 733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5" address="unix:///run/containerd/s/0fe5767b72c66f17079ddcdafbe021ab3425f413c17d8d001b36cc3f5969184d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:55:40.432621 kubelet[2906]: E0120 06:55:40.432600 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.432776 kubelet[2906]: W0120 06:55:40.432765 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.432848 kubelet[2906]: E0120 06:55:40.432823 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.433029 kubelet[2906]: E0120 06:55:40.433023 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.433072 kubelet[2906]: W0120 06:55:40.433067 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.433136 kubelet[2906]: E0120 06:55:40.433102 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.433386 kubelet[2906]: E0120 06:55:40.433377 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.433446 kubelet[2906]: W0120 06:55:40.433440 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.433528 kubelet[2906]: E0120 06:55:40.433514 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.433727 kubelet[2906]: E0120 06:55:40.433712 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.433756 kubelet[2906]: W0120 06:55:40.433726 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.434204 kubelet[2906]: E0120 06:55:40.433854 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.434436 kubelet[2906]: E0120 06:55:40.434346 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.434436 kubelet[2906]: W0120 06:55:40.434357 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.434436 kubelet[2906]: E0120 06:55:40.434369 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.434726 kubelet[2906]: E0120 06:55:40.434587 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.434726 kubelet[2906]: W0120 06:55:40.434594 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.434726 kubelet[2906]: E0120 06:55:40.434601 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.435980 kubelet[2906]: E0120 06:55:40.435959 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.436028 kubelet[2906]: W0120 06:55:40.435972 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.436028 kubelet[2906]: E0120 06:55:40.435994 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.436641 kubelet[2906]: E0120 06:55:40.436303 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.436641 kubelet[2906]: W0120 06:55:40.436314 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.436641 kubelet[2906]: E0120 06:55:40.436347 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.437602 kubelet[2906]: E0120 06:55:40.436881 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.437602 kubelet[2906]: W0120 06:55:40.436891 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.437602 kubelet[2906]: E0120 06:55:40.436929 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.437602 kubelet[2906]: E0120 06:55:40.437313 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.437602 kubelet[2906]: W0120 06:55:40.437320 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.437602 kubelet[2906]: E0120 06:55:40.437365 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.437888 kubelet[2906]: E0120 06:55:40.437821 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.438013 systemd[1]: Started cri-containerd-733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5.scope - libcontainer container 733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5. Jan 20 06:55:40.439346 kubelet[2906]: W0120 06:55:40.438981 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.439346 kubelet[2906]: E0120 06:55:40.439068 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.439724 kubelet[2906]: E0120 06:55:40.439503 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.439724 kubelet[2906]: W0120 06:55:40.439512 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.440228 kubelet[2906]: E0120 06:55:40.440214 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.440421 kubelet[2906]: W0120 06:55:40.440341 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.440688 kubelet[2906]: E0120 06:55:40.440641 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.440688 kubelet[2906]: W0120 06:55:40.440649 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.441350 kubelet[2906]: E0120 06:55:40.441253 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.441350 kubelet[2906]: W0120 06:55:40.441264 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.441350 kubelet[2906]: E0120 06:55:40.441270 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.441350 kubelet[2906]: E0120 06:55:40.441275 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.441350 kubelet[2906]: E0120 06:55:40.441288 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.441350 kubelet[2906]: E0120 06:55:40.441301 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.441980 kubelet[2906]: E0120 06:55:40.441880 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.441980 kubelet[2906]: W0120 06:55:40.441892 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.441980 kubelet[2906]: E0120 06:55:40.441907 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.442098 kubelet[2906]: E0120 06:55:40.442092 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.442208 kubelet[2906]: W0120 06:55:40.442193 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.442611 kubelet[2906]: E0120 06:55:40.442253 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.442772 kubelet[2906]: E0120 06:55:40.442760 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.442772 kubelet[2906]: W0120 06:55:40.442771 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.442867 kubelet[2906]: E0120 06:55:40.442791 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.443100 kubelet[2906]: E0120 06:55:40.443032 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.443100 kubelet[2906]: W0120 06:55:40.443039 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.443100 kubelet[2906]: E0120 06:55:40.443047 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.443858 kubelet[2906]: E0120 06:55:40.443627 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.443858 kubelet[2906]: W0120 06:55:40.443719 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.443858 kubelet[2906]: E0120 06:55:40.443736 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.444465 kubelet[2906]: E0120 06:55:40.444297 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.444465 kubelet[2906]: W0120 06:55:40.444389 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.444465 kubelet[2906]: E0120 06:55:40.444448 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.445920 kubelet[2906]: E0120 06:55:40.445908 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.446081 kubelet[2906]: W0120 06:55:40.445982 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.446081 kubelet[2906]: E0120 06:55:40.446010 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.446328 kubelet[2906]: E0120 06:55:40.446244 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.446328 kubelet[2906]: W0120 06:55:40.446252 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.446451 kubelet[2906]: E0120 06:55:40.446433 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.446484 kubelet[2906]: E0120 06:55:40.446437 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.446484 kubelet[2906]: W0120 06:55:40.446476 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.446867 kubelet[2906]: E0120 06:55:40.446816 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.446904 kubelet[2906]: E0120 06:55:40.446895 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.446924 kubelet[2906]: W0120 06:55:40.446903 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.446924 kubelet[2906]: E0120 06:55:40.446912 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:40.454193 kubelet[2906]: E0120 06:55:40.454174 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:40.454193 kubelet[2906]: W0120 06:55:40.454189 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:40.454313 kubelet[2906]: E0120 06:55:40.454201 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:40.455000 audit: BPF prog-id=156 op=LOAD Jan 20 06:55:40.456000 audit: BPF prog-id=157 op=LOAD Jan 20 06:55:40.456000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3408 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733336137626338363065643733303132366231646537373364336337 Jan 20 06:55:40.456000 audit: BPF prog-id=157 op=UNLOAD Jan 20 06:55:40.456000 audit[3418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733336137626338363065643733303132366231646537373364336337 Jan 20 06:55:40.456000 audit: BPF prog-id=158 op=LOAD Jan 20 06:55:40.456000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3408 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.456000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733336137626338363065643733303132366231646537373364336337 Jan 20 06:55:40.456000 audit: BPF prog-id=159 op=LOAD Jan 20 06:55:40.456000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3408 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733336137626338363065643733303132366231646537373364336337 Jan 20 06:55:40.456000 audit: BPF prog-id=159 op=UNLOAD Jan 20 06:55:40.456000 audit[3418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733336137626338363065643733303132366231646537373364336337 Jan 20 06:55:40.456000 audit: BPF prog-id=158 op=UNLOAD Jan 20 06:55:40.456000 audit[3418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:55:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733336137626338363065643733303132366231646537373364336337 Jan 20 06:55:40.456000 audit: BPF prog-id=160 op=LOAD Jan 20 06:55:40.456000 audit[3418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3408 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:40.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733336137626338363065643733303132366231646537373364336337 Jan 20 06:55:40.472192 containerd[1680]: time="2026-01-20T06:55:40.472145292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8fx7p,Uid:cc988606-a447-472d-a665-8dd8c9776c19,Namespace:calico-system,Attempt:0,} returns sandbox id \"733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5\"" Jan 20 06:55:41.928267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589386142.mount: Deactivated successfully. 
Jan 20 06:55:42.209635 kubelet[2906]: E0120 06:55:42.209495 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:42.920707 containerd[1680]: time="2026-01-20T06:55:42.920644648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:42.922492 containerd[1680]: time="2026-01-20T06:55:42.922456907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 20 06:55:42.923926 containerd[1680]: time="2026-01-20T06:55:42.923898107Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:42.929144 containerd[1680]: time="2026-01-20T06:55:42.927817894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:42.929226 containerd[1680]: time="2026-01-20T06:55:42.929093485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.569432821s" Jan 20 06:55:42.929226 containerd[1680]: time="2026-01-20T06:55:42.929216759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 06:55:42.931060 containerd[1680]: time="2026-01-20T06:55:42.931040388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 06:55:42.946400 containerd[1680]: time="2026-01-20T06:55:42.946319400Z" level=info msg="CreateContainer within sandbox \"8bfc1f73fe6674d0dfd18401a9a63e340c1f9d855ade552a88aead8dbcee5d29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 06:55:42.960984 containerd[1680]: time="2026-01-20T06:55:42.960950764Z" level=info msg="Container bdf549bf1424dd79068813e7ceda181bf053242b19681eee6d076661990f6830: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:42.970621 containerd[1680]: time="2026-01-20T06:55:42.970559907Z" level=info msg="CreateContainer within sandbox \"8bfc1f73fe6674d0dfd18401a9a63e340c1f9d855ade552a88aead8dbcee5d29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bdf549bf1424dd79068813e7ceda181bf053242b19681eee6d076661990f6830\"" Jan 20 06:55:42.971848 containerd[1680]: time="2026-01-20T06:55:42.971775576Z" level=info msg="StartContainer for \"bdf549bf1424dd79068813e7ceda181bf053242b19681eee6d076661990f6830\"" Jan 20 06:55:42.973464 containerd[1680]: time="2026-01-20T06:55:42.973441580Z" level=info msg="connecting to shim bdf549bf1424dd79068813e7ceda181bf053242b19681eee6d076661990f6830" address="unix:///run/containerd/s/ba4958c908169e9c956071c26903f914b965aeb0e760fdda40f06f2e3bee9feb" protocol=ttrpc version=3 Jan 20 06:55:42.995021 systemd[1]: Started cri-containerd-bdf549bf1424dd79068813e7ceda181bf053242b19681eee6d076661990f6830.scope - libcontainer container bdf549bf1424dd79068813e7ceda181bf053242b19681eee6d076661990f6830. 
Jan 20 06:55:43.004000 audit: BPF prog-id=161 op=LOAD Jan 20 06:55:43.007000 kernel: kauditd_printk_skb: 75 callbacks suppressed Jan 20 06:55:43.007043 kernel: audit: type=1334 audit(1768892143.004:553): prog-id=161 op=LOAD Jan 20 06:55:43.007000 audit: BPF prog-id=162 op=LOAD Jan 20 06:55:43.007000 audit[3481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.013120 kernel: audit: type=1334 audit(1768892143.007:554): prog-id=162 op=LOAD Jan 20 06:55:43.013155 kernel: audit: type=1300 audit(1768892143.007:554): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.017591 kernel: audit: type=1327 audit(1768892143.007:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.007000 audit: BPF prog-id=162 op=UNLOAD Jan 20 06:55:43.020892 kernel: audit: type=1334 audit(1768892143.007:555): prog-id=162 op=UNLOAD Jan 20 06:55:43.007000 audit[3481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3313 pid=3481 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.023342 kernel: audit: type=1300 audit(1768892143.007:555): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.007000 audit: BPF prog-id=163 op=LOAD Jan 20 06:55:43.030921 kernel: audit: type=1327 audit(1768892143.007:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.030957 kernel: audit: type=1334 audit(1768892143.007:556): prog-id=163 op=LOAD Jan 20 06:55:43.007000 audit[3481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.033285 kernel: audit: type=1300 audit(1768892143.007:556): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:55:43.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.037586 kernel: audit: type=1327 audit(1768892143.007:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.007000 audit: BPF prog-id=164 op=LOAD Jan 20 06:55:43.007000 audit[3481]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.007000 audit: BPF prog-id=164 op=UNLOAD Jan 20 06:55:43.007000 audit[3481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 
Jan 20 06:55:43.007000 audit: BPF prog-id=163 op=UNLOAD Jan 20 06:55:43.007000 audit[3481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.007000 audit: BPF prog-id=165 op=LOAD Jan 20 06:55:43.007000 audit[3481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3313 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:43.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264663534396266313432346464373930363838313365376365646131 Jan 20 06:55:43.062190 containerd[1680]: time="2026-01-20T06:55:43.062149319Z" level=info msg="StartContainer for \"bdf549bf1424dd79068813e7ceda181bf053242b19681eee6d076661990f6830\" returns successfully" Jan 20 06:55:43.328905 kubelet[2906]: E0120 06:55:43.328310 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.329723 kubelet[2906]: W0120 06:55:43.329334 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Jan 20 06:55:43.329723 kubelet[2906]: E0120 06:55:43.329451 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.329723 kubelet[2906]: E0120 06:55:43.329626 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.329723 kubelet[2906]: W0120 06:55:43.329634 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.329723 kubelet[2906]: E0120 06:55:43.329651 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.330030 kubelet[2906]: E0120 06:55:43.329929 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.330030 kubelet[2906]: W0120 06:55:43.329937 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.330030 kubelet[2906]: E0120 06:55:43.329945 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.330269 kubelet[2906]: E0120 06:55:43.330213 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.330537 kubelet[2906]: W0120 06:55:43.330308 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.330537 kubelet[2906]: E0120 06:55:43.330320 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.330808 kubelet[2906]: E0120 06:55:43.330733 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.330808 kubelet[2906]: W0120 06:55:43.330750 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.330808 kubelet[2906]: E0120 06:55:43.330758 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.331166 kubelet[2906]: E0120 06:55:43.331033 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.331166 kubelet[2906]: W0120 06:55:43.331040 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.331166 kubelet[2906]: E0120 06:55:43.331048 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.331399 kubelet[2906]: E0120 06:55:43.331359 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.331631 kubelet[2906]: W0120 06:55:43.331587 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.331631 kubelet[2906]: E0120 06:55:43.331601 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.332847 kubelet[2906]: E0120 06:55:43.332002 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.332997 kubelet[2906]: W0120 06:55:43.332932 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.332997 kubelet[2906]: E0120 06:55:43.332949 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.333192 kubelet[2906]: E0120 06:55:43.333185 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.333312 kubelet[2906]: W0120 06:55:43.333243 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.333312 kubelet[2906]: E0120 06:55:43.333253 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.333397 kubelet[2906]: E0120 06:55:43.333391 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.333498 kubelet[2906]: W0120 06:55:43.333424 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.333498 kubelet[2906]: E0120 06:55:43.333440 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.333588 kubelet[2906]: E0120 06:55:43.333583 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.333697 kubelet[2906]: W0120 06:55:43.333624 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.333697 kubelet[2906]: E0120 06:55:43.333637 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.333783 kubelet[2906]: E0120 06:55:43.333778 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.333850 kubelet[2906]: W0120 06:55:43.333810 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.333898 kubelet[2906]: E0120 06:55:43.333883 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.334116 kubelet[2906]: E0120 06:55:43.334041 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.334116 kubelet[2906]: W0120 06:55:43.334048 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.334116 kubelet[2906]: E0120 06:55:43.334054 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.334227 kubelet[2906]: E0120 06:55:43.334221 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.334785 kubelet[2906]: W0120 06:55:43.334296 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.334785 kubelet[2906]: E0120 06:55:43.334305 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.335054 kubelet[2906]: E0120 06:55:43.334990 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.335054 kubelet[2906]: W0120 06:55:43.334999 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.335054 kubelet[2906]: E0120 06:55:43.335012 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.357086 kubelet[2906]: I0120 06:55:43.357033 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f44d4f6b9-9j7rz" podStartSLOduration=1.785123938 podStartE2EDuration="4.357019254s" podCreationTimestamp="2026-01-20 06:55:39 +0000 UTC" firstStartedPulling="2026-01-20 06:55:40.359224099 +0000 UTC m=+19.270861858" lastFinishedPulling="2026-01-20 06:55:42.931119415 +0000 UTC m=+21.842757174" observedRunningTime="2026-01-20 06:55:43.35666526 +0000 UTC m=+22.268303042" watchObservedRunningTime="2026-01-20 06:55:43.357019254 +0000 UTC m=+22.268657063" Jan 20 06:55:43.359275 kubelet[2906]: E0120 06:55:43.359250 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.359275 kubelet[2906]: W0120 06:55:43.359270 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.359455 kubelet[2906]: E0120 06:55:43.359300 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.359496 kubelet[2906]: E0120 06:55:43.359484 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.359529 kubelet[2906]: W0120 06:55:43.359499 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.359529 kubelet[2906]: E0120 06:55:43.359506 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.359722 kubelet[2906]: E0120 06:55:43.359712 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.359722 kubelet[2906]: W0120 06:55:43.359721 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.359863 kubelet[2906]: E0120 06:55:43.359728 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.361741 kubelet[2906]: E0120 06:55:43.361721 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.361741 kubelet[2906]: W0120 06:55:43.361734 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.361741 kubelet[2906]: E0120 06:55:43.361748 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.361915 kubelet[2906]: E0120 06:55:43.361890 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.361915 kubelet[2906]: W0120 06:55:43.361895 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.361979 kubelet[2906]: E0120 06:55:43.361965 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.362050 kubelet[2906]: E0120 06:55:43.362039 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.362050 kubelet[2906]: W0120 06:55:43.362046 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.362175 kubelet[2906]: E0120 06:55:43.362070 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.362175 kubelet[2906]: E0120 06:55:43.362153 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.362175 kubelet[2906]: W0120 06:55:43.362159 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.362341 kubelet[2906]: E0120 06:55:43.362314 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.362341 kubelet[2906]: W0120 06:55:43.362341 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.362391 kubelet[2906]: E0120 06:55:43.362349 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.362760 kubelet[2906]: E0120 06:55:43.362632 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.362760 kubelet[2906]: W0120 06:55:43.362644 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.362760 kubelet[2906]: E0120 06:55:43.362655 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.362760 kubelet[2906]: E0120 06:55:43.362708 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.364202 kubelet[2906]: E0120 06:55:43.364113 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.364202 kubelet[2906]: W0120 06:55:43.364124 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.364202 kubelet[2906]: E0120 06:55:43.364139 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.364330 kubelet[2906]: E0120 06:55:43.364324 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.364428 kubelet[2906]: W0120 06:55:43.364360 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.364428 kubelet[2906]: E0120 06:55:43.364380 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.364580 kubelet[2906]: E0120 06:55:43.364515 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.364580 kubelet[2906]: W0120 06:55:43.364522 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.364580 kubelet[2906]: E0120 06:55:43.364535 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.364799 kubelet[2906]: E0120 06:55:43.364719 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.364799 kubelet[2906]: W0120 06:55:43.364726 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.364799 kubelet[2906]: E0120 06:55:43.364743 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.364923 kubelet[2906]: E0120 06:55:43.364917 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.365126 kubelet[2906]: W0120 06:55:43.364951 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.365126 kubelet[2906]: E0120 06:55:43.364964 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.365173 kubelet[2906]: E0120 06:55:43.365156 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.365173 kubelet[2906]: W0120 06:55:43.365164 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.365248 kubelet[2906]: E0120 06:55:43.365173 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.365316 kubelet[2906]: E0120 06:55:43.365306 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.365316 kubelet[2906]: W0120 06:55:43.365316 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.365358 kubelet[2906]: E0120 06:55:43.365322 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:43.365523 kubelet[2906]: E0120 06:55:43.365515 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.365565 kubelet[2906]: W0120 06:55:43.365559 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.365612 kubelet[2906]: E0120 06:55:43.365604 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:43.365805 kubelet[2906]: E0120 06:55:43.365776 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:43.366334 kubelet[2906]: W0120 06:55:43.366319 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:43.366452 kubelet[2906]: E0120 06:55:43.366444 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.210559 kubelet[2906]: E0120 06:55:44.209872 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:44.311582 kubelet[2906]: I0120 06:55:44.311561 2906 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 06:55:44.339597 kubelet[2906]: E0120 06:55:44.339568 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.340023 kubelet[2906]: W0120 06:55:44.339958 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.340023 kubelet[2906]: E0120 06:55:44.339984 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.340365 kubelet[2906]: E0120 06:55:44.340342 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.340521 kubelet[2906]: W0120 06:55:44.340455 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.340521 kubelet[2906]: E0120 06:55:44.340471 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.340817 kubelet[2906]: E0120 06:55:44.340723 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.340817 kubelet[2906]: W0120 06:55:44.340741 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.340817 kubelet[2906]: E0120 06:55:44.340749 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.341050 kubelet[2906]: E0120 06:55:44.341014 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.341050 kubelet[2906]: W0120 06:55:44.341022 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.341050 kubelet[2906]: E0120 06:55:44.341031 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.341359 kubelet[2906]: E0120 06:55:44.341314 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.341359 kubelet[2906]: W0120 06:55:44.341322 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.341359 kubelet[2906]: E0120 06:55:44.341329 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.341588 kubelet[2906]: E0120 06:55:44.341543 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.341588 kubelet[2906]: W0120 06:55:44.341549 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.341588 kubelet[2906]: E0120 06:55:44.341557 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.341817 kubelet[2906]: E0120 06:55:44.341775 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.341817 kubelet[2906]: W0120 06:55:44.341782 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.341817 kubelet[2906]: E0120 06:55:44.341789 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.342071 kubelet[2906]: E0120 06:55:44.342056 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.342143 kubelet[2906]: W0120 06:55:44.342103 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.342143 kubelet[2906]: E0120 06:55:44.342111 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.342345 kubelet[2906]: E0120 06:55:44.342334 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.342438 kubelet[2906]: W0120 06:55:44.342379 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.342438 kubelet[2906]: E0120 06:55:44.342387 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.342527 kubelet[2906]: E0120 06:55:44.342522 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.342599 kubelet[2906]: W0120 06:55:44.342566 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.342599 kubelet[2906]: E0120 06:55:44.342573 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.342823 kubelet[2906]: E0120 06:55:44.342780 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.342823 kubelet[2906]: W0120 06:55:44.342786 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.342823 kubelet[2906]: E0120 06:55:44.342792 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.343063 kubelet[2906]: E0120 06:55:44.343019 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.343063 kubelet[2906]: W0120 06:55:44.343026 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.343063 kubelet[2906]: E0120 06:55:44.343032 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.343299 kubelet[2906]: E0120 06:55:44.343255 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.343299 kubelet[2906]: W0120 06:55:44.343262 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.343299 kubelet[2906]: E0120 06:55:44.343268 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.343572 kubelet[2906]: E0120 06:55:44.343495 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.343572 kubelet[2906]: W0120 06:55:44.343503 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.343572 kubelet[2906]: E0120 06:55:44.343509 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.343731 kubelet[2906]: E0120 06:55:44.343686 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.343731 kubelet[2906]: W0120 06:55:44.343693 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.343731 kubelet[2906]: E0120 06:55:44.343699 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.368350 kubelet[2906]: E0120 06:55:44.368303 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.368350 kubelet[2906]: W0120 06:55:44.368330 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.368350 kubelet[2906]: E0120 06:55:44.368401 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.369215 kubelet[2906]: E0120 06:55:44.369172 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.369215 kubelet[2906]: W0120 06:55:44.369200 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.369371 kubelet[2906]: E0120 06:55:44.369274 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.369530 kubelet[2906]: E0120 06:55:44.369522 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.369587 kubelet[2906]: W0120 06:55:44.369581 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.369718 kubelet[2906]: E0120 06:55:44.369627 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.369908 kubelet[2906]: E0120 06:55:44.369891 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.369960 kubelet[2906]: W0120 06:55:44.369909 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.369960 kubelet[2906]: E0120 06:55:44.369929 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.370084 kubelet[2906]: E0120 06:55:44.370072 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.370084 kubelet[2906]: W0120 06:55:44.370081 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.370180 kubelet[2906]: E0120 06:55:44.370092 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.370215 kubelet[2906]: E0120 06:55:44.370209 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.370247 kubelet[2906]: W0120 06:55:44.370216 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.370247 kubelet[2906]: E0120 06:55:44.370226 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.370380 kubelet[2906]: E0120 06:55:44.370371 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.370380 kubelet[2906]: W0120 06:55:44.370379 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.370426 kubelet[2906]: E0120 06:55:44.370389 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.370777 kubelet[2906]: E0120 06:55:44.370727 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.370777 kubelet[2906]: W0120 06:55:44.370738 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.370777 kubelet[2906]: E0120 06:55:44.370752 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.371045 kubelet[2906]: E0120 06:55:44.370995 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.371045 kubelet[2906]: W0120 06:55:44.371003 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.371045 kubelet[2906]: E0120 06:55:44.371011 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.371442 kubelet[2906]: E0120 06:55:44.371408 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.371695 kubelet[2906]: W0120 06:55:44.371628 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.371695 kubelet[2906]: E0120 06:55:44.371668 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.372402 kubelet[2906]: E0120 06:55:44.372379 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.372402 kubelet[2906]: W0120 06:55:44.372389 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.373045 kubelet[2906]: E0120 06:55:44.372852 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.373411 kubelet[2906]: E0120 06:55:44.373402 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.373676 kubelet[2906]: W0120 06:55:44.373546 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.373676 kubelet[2906]: E0120 06:55:44.373561 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.374114 kubelet[2906]: E0120 06:55:44.374084 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.374298 kubelet[2906]: W0120 06:55:44.374210 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.374298 kubelet[2906]: E0120 06:55:44.374245 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.375114 kubelet[2906]: E0120 06:55:44.375094 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.375256 kubelet[2906]: W0120 06:55:44.375187 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.375338 kubelet[2906]: E0120 06:55:44.375309 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.375531 kubelet[2906]: E0120 06:55:44.375486 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.375531 kubelet[2906]: W0120 06:55:44.375505 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.375637 kubelet[2906]: E0120 06:55:44.375518 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.376057 kubelet[2906]: E0120 06:55:44.376017 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.376057 kubelet[2906]: W0120 06:55:44.376056 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.376057 kubelet[2906]: E0120 06:55:44.376085 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.376551 kubelet[2906]: E0120 06:55:44.376535 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.376551 kubelet[2906]: W0120 06:55:44.376548 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.376608 kubelet[2906]: E0120 06:55:44.376560 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:55:44.376981 kubelet[2906]: E0120 06:55:44.376967 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:55:44.376981 kubelet[2906]: W0120 06:55:44.376977 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:55:44.377042 kubelet[2906]: E0120 06:55:44.376985 2906 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:55:44.497389 containerd[1680]: time="2026-01-20T06:55:44.497253519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:44.498950 containerd[1680]: time="2026-01-20T06:55:44.498924903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 06:55:44.500547 containerd[1680]: time="2026-01-20T06:55:44.500522945Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:44.503389 containerd[1680]: time="2026-01-20T06:55:44.503340716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:44.504004 containerd[1680]: time="2026-01-20T06:55:44.503950088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.572882529s" Jan 20 06:55:44.504004 containerd[1680]: time="2026-01-20T06:55:44.503975286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 06:55:44.506004 containerd[1680]: time="2026-01-20T06:55:44.505971205Z" level=info msg="CreateContainer within sandbox \"733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 06:55:44.525009 containerd[1680]: time="2026-01-20T06:55:44.524214152Z" level=info msg="Container 51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:44.524334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050110375.mount: Deactivated successfully. Jan 20 06:55:44.537424 containerd[1680]: time="2026-01-20T06:55:44.537375243Z" level=info msg="CreateContainer within sandbox \"733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477\"" Jan 20 06:55:44.537916 containerd[1680]: time="2026-01-20T06:55:44.537894216Z" level=info msg="StartContainer for \"51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477\"" Jan 20 06:55:44.540446 containerd[1680]: time="2026-01-20T06:55:44.540358499Z" level=info msg="connecting to shim 51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477" address="unix:///run/containerd/s/0fe5767b72c66f17079ddcdafbe021ab3425f413c17d8d001b36cc3f5969184d" protocol=ttrpc version=3 Jan 20 06:55:44.562005 systemd[1]: Started cri-containerd-51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477.scope - libcontainer container 51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477. 
Jan 20 06:55:44.618000 audit: BPF prog-id=166 op=LOAD Jan 20 06:55:44.618000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3408 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:44.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646565356563383932613235373465643034633939663164646461 Jan 20 06:55:44.618000 audit: BPF prog-id=167 op=LOAD Jan 20 06:55:44.618000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3408 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:44.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646565356563383932613235373465643034633939663164646461 Jan 20 06:55:44.618000 audit: BPF prog-id=167 op=UNLOAD Jan 20 06:55:44.618000 audit[3590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:44.618000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646565356563383932613235373465643034633939663164646461 Jan 20 06:55:44.618000 audit: BPF prog-id=166 op=UNLOAD Jan 20 06:55:44.618000 audit[3590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:44.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646565356563383932613235373465643034633939663164646461 Jan 20 06:55:44.618000 audit: BPF prog-id=168 op=LOAD Jan 20 06:55:44.618000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3408 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:44.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646565356563383932613235373465643034633939663164646461 Jan 20 06:55:44.645060 containerd[1680]: time="2026-01-20T06:55:44.645026825Z" level=info msg="StartContainer for \"51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477\" returns successfully" Jan 20 06:55:44.656353 systemd[1]: cri-containerd-51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477.scope: Deactivated successfully. 
Jan 20 06:55:44.659173 containerd[1680]: time="2026-01-20T06:55:44.659062848Z" level=info msg="received container exit event container_id:\"51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477\" id:\"51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477\" pid:3603 exited_at:{seconds:1768892144 nanos:658489546}" Jan 20 06:55:44.658000 audit: BPF prog-id=168 op=UNLOAD Jan 20 06:55:44.682126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51dee5ec892a2574ed04c99f1dddae229538f793e007b31bca31fcce41d34477-rootfs.mount: Deactivated successfully. Jan 20 06:55:46.209510 kubelet[2906]: E0120 06:55:46.209463 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:47.323086 containerd[1680]: time="2026-01-20T06:55:47.323035063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 06:55:47.403331 kubelet[2906]: I0120 06:55:47.402285 2906 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 06:55:47.435000 audit[3643]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:47.435000 audit[3643]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce6d16ad0 a2=0 a3=7ffce6d16abc items=0 ppid=3052 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:47.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:47.440000 audit[3643]: NETFILTER_CFG 
table=nat:118 family=2 entries=19 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:55:47.440000 audit[3643]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffce6d16ad0 a2=0 a3=7ffce6d16abc items=0 ppid=3052 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:47.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:55:48.209580 kubelet[2906]: E0120 06:55:48.209139 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:50.209647 kubelet[2906]: E0120 06:55:50.209540 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:51.245495 containerd[1680]: time="2026-01-20T06:55:51.245095459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:51.247414 containerd[1680]: time="2026-01-20T06:55:51.247387119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 20 06:55:51.249302 containerd[1680]: time="2026-01-20T06:55:51.249113963Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:51.251661 containerd[1680]: time="2026-01-20T06:55:51.251636812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:55:51.252116 containerd[1680]: time="2026-01-20T06:55:51.252099184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.929018408s" Jan 20 06:55:51.252182 containerd[1680]: time="2026-01-20T06:55:51.252172536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 06:55:51.254531 containerd[1680]: time="2026-01-20T06:55:51.254433934Z" level=info msg="CreateContainer within sandbox \"733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 06:55:51.267335 containerd[1680]: time="2026-01-20T06:55:51.267297944Z" level=info msg="Container e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:55:51.277901 containerd[1680]: time="2026-01-20T06:55:51.277863330Z" level=info msg="CreateContainer within sandbox \"733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\"" Jan 20 06:55:51.278477 containerd[1680]: time="2026-01-20T06:55:51.278418318Z" 
level=info msg="StartContainer for \"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\"" Jan 20 06:55:51.279647 containerd[1680]: time="2026-01-20T06:55:51.279620842Z" level=info msg="connecting to shim e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981" address="unix:///run/containerd/s/0fe5767b72c66f17079ddcdafbe021ab3425f413c17d8d001b36cc3f5969184d" protocol=ttrpc version=3 Jan 20 06:55:51.300059 systemd[1]: Started cri-containerd-e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981.scope - libcontainer container e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981. Jan 20 06:55:51.352000 audit: BPF prog-id=169 op=LOAD Jan 20 06:55:51.353978 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 06:55:51.354039 kernel: audit: type=1334 audit(1768892151.352:569): prog-id=169 op=LOAD Jan 20 06:55:51.352000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.362185 kernel: audit: type=1300 audit(1768892151.352:569): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.362236 kernel: audit: type=1327 audit(1768892151.352:569): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.352000 audit: BPF prog-id=170 op=LOAD Jan 20 06:55:51.365157 kernel: audit: type=1334 audit(1768892151.352:570): prog-id=170 op=LOAD Jan 20 06:55:51.352000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.372177 kernel: audit: type=1300 audit(1768892151.352:570): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.372234 kernel: audit: type=1327 audit(1768892151.352:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.353000 audit: BPF prog-id=170 op=UNLOAD Jan 20 06:55:51.375161 kernel: audit: type=1334 audit(1768892151.353:571): prog-id=170 op=UNLOAD Jan 20 06:55:51.353000 audit[3652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 
a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.382030 kernel: audit: type=1300 audit(1768892151.353:571): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.382089 kernel: audit: type=1327 audit(1768892151.353:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.353000 audit: BPF prog-id=169 op=UNLOAD Jan 20 06:55:51.388841 kernel: audit: type=1334 audit(1768892151.353:572): prog-id=169 op=UNLOAD Jan 20 06:55:51.353000 audit[3652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.353000 audit: 
BPF prog-id=171 op=LOAD Jan 20 06:55:51.353000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3408 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:55:51.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316261363137306430613938653064363638343934653264303339 Jan 20 06:55:51.409301 containerd[1680]: time="2026-01-20T06:55:51.409271840Z" level=info msg="StartContainer for \"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\" returns successfully" Jan 20 06:55:52.209376 kubelet[2906]: E0120 06:55:52.209249 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:52.717447 systemd[1]: cri-containerd-e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981.scope: Deactivated successfully. Jan 20 06:55:52.717980 systemd[1]: cri-containerd-e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981.scope: Consumed 454ms CPU time, 195.1M memory peak, 171.3M written to disk. 
Jan 20 06:55:52.719045 containerd[1680]: time="2026-01-20T06:55:52.719012433Z" level=info msg="received container exit event container_id:\"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\" id:\"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\" pid:3664 exited_at:{seconds:1768892152 nanos:718570018}" Jan 20 06:55:52.721000 audit: BPF prog-id=171 op=UNLOAD Jan 20 06:55:52.741257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981-rootfs.mount: Deactivated successfully. Jan 20 06:55:52.773260 kubelet[2906]: I0120 06:55:52.773072 2906 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 06:55:52.819272 systemd[1]: Created slice kubepods-burstable-pod62cdcd04_03ba_4af6_a274_3cc1be14a458.slice - libcontainer container kubepods-burstable-pod62cdcd04_03ba_4af6_a274_3cc1be14a458.slice. Jan 20 06:55:52.830178 systemd[1]: Created slice kubepods-burstable-podc83e0ef7_8277_4d9b_af20_88f781d4eb21.slice - libcontainer container kubepods-burstable-podc83e0ef7_8277_4d9b_af20_88f781d4eb21.slice. Jan 20 06:55:52.841343 systemd[1]: Created slice kubepods-besteffort-podde3eadd9_d35e_43b1_acf5_88fe04381bf9.slice - libcontainer container kubepods-besteffort-podde3eadd9_d35e_43b1_acf5_88fe04381bf9.slice. Jan 20 06:55:52.851523 systemd[1]: Created slice kubepods-besteffort-pod10f0b151_ea65_4a89_806a_f2fc08df9708.slice - libcontainer container kubepods-besteffort-pod10f0b151_ea65_4a89_806a_f2fc08df9708.slice. Jan 20 06:55:52.858615 systemd[1]: Created slice kubepods-besteffort-pode2f4d2c7_d335_42bb_9262_2a522436304e.slice - libcontainer container kubepods-besteffort-pode2f4d2c7_d335_42bb_9262_2a522436304e.slice. 
Jan 20 06:55:53.366227 kubelet[2906]: I0120 06:55:52.924984 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnkcv\" (UniqueName: \"kubernetes.io/projected/52f9dd62-5a28-4d34-8a7e-35c040c0ecfe-kube-api-access-gnkcv\") pod \"calico-apiserver-597cf6c9f4-c4gmt\" (UID: \"52f9dd62-5a28-4d34-8a7e-35c040c0ecfe\") " pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" Jan 20 06:55:53.366227 kubelet[2906]: I0120 06:55:52.925019 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62cdcd04-03ba-4af6-a274-3cc1be14a458-config-volume\") pod \"coredns-668d6bf9bc-g5pgm\" (UID: \"62cdcd04-03ba-4af6-a274-3cc1be14a458\") " pod="kube-system/coredns-668d6bf9bc-g5pgm" Jan 20 06:55:53.366227 kubelet[2906]: I0120 06:55:52.925055 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkgj4\" (UniqueName: \"kubernetes.io/projected/de3eadd9-d35e-43b1-acf5-88fe04381bf9-kube-api-access-fkgj4\") pod \"calico-apiserver-597cf6c9f4-jx5cs\" (UID: \"de3eadd9-d35e-43b1-acf5-88fe04381bf9\") " pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" Jan 20 06:55:53.366227 kubelet[2906]: I0120 06:55:52.925072 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnsz5\" (UniqueName: \"kubernetes.io/projected/c83e0ef7-8277-4d9b-af20-88f781d4eb21-kube-api-access-nnsz5\") pod \"coredns-668d6bf9bc-dkxm9\" (UID: \"c83e0ef7-8277-4d9b-af20-88f781d4eb21\") " pod="kube-system/coredns-668d6bf9bc-dkxm9" Jan 20 06:55:53.366227 kubelet[2906]: I0120 06:55:52.925089 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htbc8\" (UniqueName: \"kubernetes.io/projected/e2f4d2c7-d335-42bb-9262-2a522436304e-kube-api-access-htbc8\") pod 
\"calico-kube-controllers-676fb446dd-r6rmf\" (UID: \"e2f4d2c7-d335-42bb-9262-2a522436304e\") " pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" Jan 20 06:55:52.864552 systemd[1]: Created slice kubepods-besteffort-pod52f9dd62_5a28_4d34_8a7e_35c040c0ecfe.slice - libcontainer container kubepods-besteffort-pod52f9dd62_5a28_4d34_8a7e_35c040c0ecfe.slice. Jan 20 06:55:53.366659 kubelet[2906]: I0120 06:55:52.925105 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-backend-key-pair\") pod \"whisker-77547bfd8c-4v5dx\" (UID: \"10f0b151-ea65-4a89-806a-f2fc08df9708\") " pod="calico-system/whisker-77547bfd8c-4v5dx" Jan 20 06:55:53.366659 kubelet[2906]: I0120 06:55:52.925121 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f79bfe2-fd9a-4aff-ac23-745eeb4426b7-config\") pod \"goldmane-666569f655-8j6t9\" (UID: \"3f79bfe2-fd9a-4aff-ac23-745eeb4426b7\") " pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:55:53.366659 kubelet[2906]: I0120 06:55:52.925136 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrnn\" (UniqueName: \"kubernetes.io/projected/3f79bfe2-fd9a-4aff-ac23-745eeb4426b7-kube-api-access-khrnn\") pod \"goldmane-666569f655-8j6t9\" (UID: \"3f79bfe2-fd9a-4aff-ac23-745eeb4426b7\") " pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:55:53.366659 kubelet[2906]: I0120 06:55:52.925151 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f4d2c7-d335-42bb-9262-2a522436304e-tigera-ca-bundle\") pod \"calico-kube-controllers-676fb446dd-r6rmf\" (UID: \"e2f4d2c7-d335-42bb-9262-2a522436304e\") " 
pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" Jan 20 06:55:53.366659 kubelet[2906]: I0120 06:55:52.925165 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7xs\" (UniqueName: \"kubernetes.io/projected/10f0b151-ea65-4a89-806a-f2fc08df9708-kube-api-access-4c7xs\") pod \"whisker-77547bfd8c-4v5dx\" (UID: \"10f0b151-ea65-4a89-806a-f2fc08df9708\") " pod="calico-system/whisker-77547bfd8c-4v5dx" Jan 20 06:55:52.871656 systemd[1]: Created slice kubepods-besteffort-pod3f79bfe2_fd9a_4aff_ac23_745eeb4426b7.slice - libcontainer container kubepods-besteffort-pod3f79bfe2_fd9a_4aff_ac23_745eeb4426b7.slice. Jan 20 06:55:53.366816 kubelet[2906]: I0120 06:55:52.925183 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de3eadd9-d35e-43b1-acf5-88fe04381bf9-calico-apiserver-certs\") pod \"calico-apiserver-597cf6c9f4-jx5cs\" (UID: \"de3eadd9-d35e-43b1-acf5-88fe04381bf9\") " pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" Jan 20 06:55:53.366816 kubelet[2906]: I0120 06:55:52.925217 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c83e0ef7-8277-4d9b-af20-88f781d4eb21-config-volume\") pod \"coredns-668d6bf9bc-dkxm9\" (UID: \"c83e0ef7-8277-4d9b-af20-88f781d4eb21\") " pod="kube-system/coredns-668d6bf9bc-dkxm9" Jan 20 06:55:53.366816 kubelet[2906]: I0120 06:55:52.925235 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f79bfe2-fd9a-4aff-ac23-745eeb4426b7-goldmane-ca-bundle\") pod \"goldmane-666569f655-8j6t9\" (UID: \"3f79bfe2-fd9a-4aff-ac23-745eeb4426b7\") " pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:55:53.366816 kubelet[2906]: I0120 06:55:52.925249 2906 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-ca-bundle\") pod \"whisker-77547bfd8c-4v5dx\" (UID: \"10f0b151-ea65-4a89-806a-f2fc08df9708\") " pod="calico-system/whisker-77547bfd8c-4v5dx" Jan 20 06:55:53.366816 kubelet[2906]: I0120 06:55:52.925268 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52f9dd62-5a28-4d34-8a7e-35c040c0ecfe-calico-apiserver-certs\") pod \"calico-apiserver-597cf6c9f4-c4gmt\" (UID: \"52f9dd62-5a28-4d34-8a7e-35c040c0ecfe\") " pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" Jan 20 06:55:53.370373 kubelet[2906]: I0120 06:55:52.925282 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3f79bfe2-fd9a-4aff-ac23-745eeb4426b7-goldmane-key-pair\") pod \"goldmane-666569f655-8j6t9\" (UID: \"3f79bfe2-fd9a-4aff-ac23-745eeb4426b7\") " pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:55:53.370373 kubelet[2906]: I0120 06:55:52.925297 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxzk\" (UniqueName: \"kubernetes.io/projected/62cdcd04-03ba-4af6-a274-3cc1be14a458-kube-api-access-5lxzk\") pod \"coredns-668d6bf9bc-g5pgm\" (UID: \"62cdcd04-03ba-4af6-a274-3cc1be14a458\") " pod="kube-system/coredns-668d6bf9bc-g5pgm" Jan 20 06:55:54.214988 systemd[1]: Created slice kubepods-besteffort-pod506bd27e_3197_4d34_a858_e04017d318df.slice - libcontainer container kubepods-besteffort-pod506bd27e_3197_4d34_a858_e04017d318df.slice. 
Jan 20 06:55:54.217451 containerd[1680]: time="2026-01-20T06:55:54.217364113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8w7k,Uid:506bd27e-3197-4d34-a858-e04017d318df,Namespace:calico-system,Attempt:0,}" Jan 20 06:55:54.270370 containerd[1680]: time="2026-01-20T06:55:54.270311363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77547bfd8c-4v5dx,Uid:10f0b151-ea65-4a89-806a-f2fc08df9708,Namespace:calico-system,Attempt:0,}" Jan 20 06:55:54.278097 containerd[1680]: time="2026-01-20T06:55:54.277991936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-676fb446dd-r6rmf,Uid:e2f4d2c7-d335-42bb-9262-2a522436304e,Namespace:calico-system,Attempt:0,}" Jan 20 06:55:54.278369 containerd[1680]: time="2026-01-20T06:55:54.278338516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8j6t9,Uid:3f79bfe2-fd9a-4aff-ac23-745eeb4426b7,Namespace:calico-system,Attempt:0,}" Jan 20 06:55:54.294138 containerd[1680]: time="2026-01-20T06:55:54.294102206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-c4gmt,Uid:52f9dd62-5a28-4d34-8a7e-35c040c0ecfe,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:55:54.326031 containerd[1680]: time="2026-01-20T06:55:54.325975470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5pgm,Uid:62cdcd04-03ba-4af6-a274-3cc1be14a458,Namespace:kube-system,Attempt:0,}" Jan 20 06:55:54.334257 containerd[1680]: time="2026-01-20T06:55:54.334206727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dkxm9,Uid:c83e0ef7-8277-4d9b-af20-88f781d4eb21,Namespace:kube-system,Attempt:0,}" Jan 20 06:55:54.346893 containerd[1680]: time="2026-01-20T06:55:54.346858735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-jx5cs,Uid:de3eadd9-d35e-43b1-acf5-88fe04381bf9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:55:54.595608 
containerd[1680]: time="2026-01-20T06:55:54.595415098Z" level=error msg="Failed to destroy network for sandbox \"dfde2c2028d55e4a70936de17041bcc8da5480656146994ff9d4f707173c1c1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.599245 containerd[1680]: time="2026-01-20T06:55:54.598944091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8w7k,Uid:506bd27e-3197-4d34-a858-e04017d318df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfde2c2028d55e4a70936de17041bcc8da5480656146994ff9d4f707173c1c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.599714 kubelet[2906]: E0120 06:55:54.599257 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfde2c2028d55e4a70936de17041bcc8da5480656146994ff9d4f707173c1c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.599714 kubelet[2906]: E0120 06:55:54.599519 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfde2c2028d55e4a70936de17041bcc8da5480656146994ff9d4f707173c1c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:55:54.599714 kubelet[2906]: E0120 06:55:54.599540 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfde2c2028d55e4a70936de17041bcc8da5480656146994ff9d4f707173c1c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:55:54.600226 kubelet[2906]: E0120 06:55:54.599579 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfde2c2028d55e4a70936de17041bcc8da5480656146994ff9d4f707173c1c1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:55:54.611395 containerd[1680]: time="2026-01-20T06:55:54.611250773Z" level=error msg="Failed to destroy network for sandbox \"245506a61457f88920e49d0a36dde82011cc6e8a512aa1e010b54babc7337d9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.617948 containerd[1680]: time="2026-01-20T06:55:54.617903400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8j6t9,Uid:3f79bfe2-fd9a-4aff-ac23-745eeb4426b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"245506a61457f88920e49d0a36dde82011cc6e8a512aa1e010b54babc7337d9a\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.618340 kubelet[2906]: E0120 06:55:54.618294 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245506a61457f88920e49d0a36dde82011cc6e8a512aa1e010b54babc7337d9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.618401 kubelet[2906]: E0120 06:55:54.618353 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245506a61457f88920e49d0a36dde82011cc6e8a512aa1e010b54babc7337d9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:55:54.618401 kubelet[2906]: E0120 06:55:54.618373 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245506a61457f88920e49d0a36dde82011cc6e8a512aa1e010b54babc7337d9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:55:54.618543 kubelet[2906]: E0120 06:55:54.618411 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"245506a61457f88920e49d0a36dde82011cc6e8a512aa1e010b54babc7337d9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:55:54.635258 containerd[1680]: time="2026-01-20T06:55:54.635217886Z" level=error msg="Failed to destroy network for sandbox \"49aa1f354a67b5ae7118680134c03264f8a3445010e063b3639d6c86f7a028c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.641438 containerd[1680]: time="2026-01-20T06:55:54.641392276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77547bfd8c-4v5dx,Uid:10f0b151-ea65-4a89-806a-f2fc08df9708,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49aa1f354a67b5ae7118680134c03264f8a3445010e063b3639d6c86f7a028c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.641733 kubelet[2906]: E0120 06:55:54.641684 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49aa1f354a67b5ae7118680134c03264f8a3445010e063b3639d6c86f7a028c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.641849 kubelet[2906]: E0120 06:55:54.641736 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"49aa1f354a67b5ae7118680134c03264f8a3445010e063b3639d6c86f7a028c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77547bfd8c-4v5dx" Jan 20 06:55:54.641849 kubelet[2906]: E0120 06:55:54.641758 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49aa1f354a67b5ae7118680134c03264f8a3445010e063b3639d6c86f7a028c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77547bfd8c-4v5dx" Jan 20 06:55:54.641849 kubelet[2906]: E0120 06:55:54.641795 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77547bfd8c-4v5dx_calico-system(10f0b151-ea65-4a89-806a-f2fc08df9708)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77547bfd8c-4v5dx_calico-system(10f0b151-ea65-4a89-806a-f2fc08df9708)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49aa1f354a67b5ae7118680134c03264f8a3445010e063b3639d6c86f7a028c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77547bfd8c-4v5dx" podUID="10f0b151-ea65-4a89-806a-f2fc08df9708" Jan 20 06:55:54.684990 containerd[1680]: time="2026-01-20T06:55:54.684888271Z" level=error msg="Failed to destroy network for sandbox \"37155becb76d7c0077943df2f2f3b2a132ec243fa3a5baec5b2dd40775afa3e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 
06:55:54.687145 containerd[1680]: time="2026-01-20T06:55:54.687101063Z" level=error msg="Failed to destroy network for sandbox \"8b1813505c4065e648d1c4a9f101bbda2bb9b4f24409a12b7649f621c6599e3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.687615 containerd[1680]: time="2026-01-20T06:55:54.687541650Z" level=error msg="Failed to destroy network for sandbox \"8321ff80d7ce77f67bea0ae313225e62b59398615ac22a7c2b7007ff43884b18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.688639 containerd[1680]: time="2026-01-20T06:55:54.688613688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dkxm9,Uid:c83e0ef7-8277-4d9b-af20-88f781d4eb21,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37155becb76d7c0077943df2f2f3b2a132ec243fa3a5baec5b2dd40775afa3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.690081 kubelet[2906]: E0120 06:55:54.690051 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37155becb76d7c0077943df2f2f3b2a132ec243fa3a5baec5b2dd40775afa3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.690558 kubelet[2906]: E0120 06:55:54.690364 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"37155becb76d7c0077943df2f2f3b2a132ec243fa3a5baec5b2dd40775afa3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dkxm9" Jan 20 06:55:54.690558 kubelet[2906]: E0120 06:55:54.690387 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37155becb76d7c0077943df2f2f3b2a132ec243fa3a5baec5b2dd40775afa3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dkxm9" Jan 20 06:55:54.690558 kubelet[2906]: E0120 06:55:54.690445 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dkxm9_kube-system(c83e0ef7-8277-4d9b-af20-88f781d4eb21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dkxm9_kube-system(c83e0ef7-8277-4d9b-af20-88f781d4eb21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37155becb76d7c0077943df2f2f3b2a132ec243fa3a5baec5b2dd40775afa3e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dkxm9" podUID="c83e0ef7-8277-4d9b-af20-88f781d4eb21" Jan 20 06:55:54.691492 containerd[1680]: time="2026-01-20T06:55:54.691467315Z" level=error msg="Failed to destroy network for sandbox \"fe0f14847ba2da13554594554b83f4f255c34d30235f3456ef26390376cbb3ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.696845 
containerd[1680]: time="2026-01-20T06:55:54.696773169Z" level=error msg="Failed to destroy network for sandbox \"ebf311c0e076ced1a92e0ed49180ae830cf098e0dd7412ad30f223387690adee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.701768 containerd[1680]: time="2026-01-20T06:55:54.701725783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5pgm,Uid:62cdcd04-03ba-4af6-a274-3cc1be14a458,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0f14847ba2da13554594554b83f4f255c34d30235f3456ef26390376cbb3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.702039 kubelet[2906]: E0120 06:55:54.702003 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0f14847ba2da13554594554b83f4f255c34d30235f3456ef26390376cbb3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.702096 kubelet[2906]: E0120 06:55:54.702053 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0f14847ba2da13554594554b83f4f255c34d30235f3456ef26390376cbb3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5pgm" Jan 20 06:55:54.702096 kubelet[2906]: E0120 06:55:54.702073 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe0f14847ba2da13554594554b83f4f255c34d30235f3456ef26390376cbb3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5pgm" Jan 20 06:55:54.702148 kubelet[2906]: E0120 06:55:54.702113 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g5pgm_kube-system(62cdcd04-03ba-4af6-a274-3cc1be14a458)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g5pgm_kube-system(62cdcd04-03ba-4af6-a274-3cc1be14a458)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe0f14847ba2da13554594554b83f4f255c34d30235f3456ef26390376cbb3ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g5pgm" podUID="62cdcd04-03ba-4af6-a274-3cc1be14a458" Jan 20 06:55:54.703209 containerd[1680]: time="2026-01-20T06:55:54.703153312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-jx5cs,Uid:de3eadd9-d35e-43b1-acf5-88fe04381bf9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1813505c4065e648d1c4a9f101bbda2bb9b4f24409a12b7649f621c6599e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.703392 kubelet[2906]: E0120 06:55:54.703361 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8b1813505c4065e648d1c4a9f101bbda2bb9b4f24409a12b7649f621c6599e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.703487 kubelet[2906]: E0120 06:55:54.703401 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1813505c4065e648d1c4a9f101bbda2bb9b4f24409a12b7649f621c6599e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" Jan 20 06:55:54.703487 kubelet[2906]: E0120 06:55:54.703418 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1813505c4065e648d1c4a9f101bbda2bb9b4f24409a12b7649f621c6599e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" Jan 20 06:55:54.703487 kubelet[2906]: E0120 06:55:54.703448 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b1813505c4065e648d1c4a9f101bbda2bb9b4f24409a12b7649f621c6599e3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:55:54.704724 containerd[1680]: time="2026-01-20T06:55:54.704653041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-676fb446dd-r6rmf,Uid:e2f4d2c7-d335-42bb-9262-2a522436304e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8321ff80d7ce77f67bea0ae313225e62b59398615ac22a7c2b7007ff43884b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.704922 kubelet[2906]: E0120 06:55:54.704893 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8321ff80d7ce77f67bea0ae313225e62b59398615ac22a7c2b7007ff43884b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.705027 kubelet[2906]: E0120 06:55:54.705008 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8321ff80d7ce77f67bea0ae313225e62b59398615ac22a7c2b7007ff43884b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" Jan 20 06:55:54.705127 kubelet[2906]: E0120 06:55:54.705068 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8321ff80d7ce77f67bea0ae313225e62b59398615ac22a7c2b7007ff43884b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" Jan 20 06:55:54.705127 kubelet[2906]: E0120 06:55:54.705101 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8321ff80d7ce77f67bea0ae313225e62b59398615ac22a7c2b7007ff43884b18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:55:54.709124 containerd[1680]: time="2026-01-20T06:55:54.709038689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-c4gmt,Uid:52f9dd62-5a28-4d34-8a7e-35c040c0ecfe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf311c0e076ced1a92e0ed49180ae830cf098e0dd7412ad30f223387690adee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:55:54.709486 kubelet[2906]: E0120 06:55:54.709463 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf311c0e076ced1a92e0ed49180ae830cf098e0dd7412ad30f223387690adee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 20 06:55:54.709546 kubelet[2906]: E0120 06:55:54.709499 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf311c0e076ced1a92e0ed49180ae830cf098e0dd7412ad30f223387690adee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" Jan 20 06:55:54.709546 kubelet[2906]: E0120 06:55:54.709534 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf311c0e076ced1a92e0ed49180ae830cf098e0dd7412ad30f223387690adee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" Jan 20 06:55:54.709607 kubelet[2906]: E0120 06:55:54.709566 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebf311c0e076ced1a92e0ed49180ae830cf098e0dd7412ad30f223387690adee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:56:02.719982 containerd[1680]: time="2026-01-20T06:56:02.719872818Z" level=error msg="failed 
to handle container TaskExit event container_id:\"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\" id:\"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\" pid:3664 exited_at:{seconds:1768892152 nanos:718570018}" error="failed to stop container: failed to delete task: context deadline exceeded" Jan 20 06:56:03.804599 containerd[1680]: time="2026-01-20T06:56:03.804431699Z" level=info msg="TaskExit event container_id:\"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\" id:\"e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981\" pid:3664 exited_at:{seconds:1768892152 nanos:718570018}" Jan 20 06:56:05.804839 containerd[1680]: time="2026-01-20T06:56:05.804769529Z" level=error msg="get state for e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981" error="context deadline exceeded" Jan 20 06:56:05.804839 containerd[1680]: time="2026-01-20T06:56:05.804816113Z" level=warning msg="unknown status" status=0 Jan 20 06:56:06.210153 containerd[1680]: time="2026-01-20T06:56:06.210098276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dkxm9,Uid:c83e0ef7-8277-4d9b-af20-88f781d4eb21,Namespace:kube-system,Attempt:0,}" Jan 20 06:56:06.259162 containerd[1680]: time="2026-01-20T06:56:06.259119338Z" level=error msg="Failed to destroy network for sandbox \"a141e6f76c65d4fb3a1330da80adc7c5690c828366b2119bdb39fbe8c42926a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:06.261268 systemd[1]: run-netns-cni\x2dd20de93f\x2d9b2a\x2d8a31\x2d3d5d\x2dd55723cff8e8.mount: Deactivated successfully. 
Jan 20 06:56:06.262736 containerd[1680]: time="2026-01-20T06:56:06.262693297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dkxm9,Uid:c83e0ef7-8277-4d9b-af20-88f781d4eb21,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a141e6f76c65d4fb3a1330da80adc7c5690c828366b2119bdb39fbe8c42926a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:06.262965 kubelet[2906]: E0120 06:56:06.262929 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a141e6f76c65d4fb3a1330da80adc7c5690c828366b2119bdb39fbe8c42926a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:06.263419 kubelet[2906]: E0120 06:56:06.262982 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a141e6f76c65d4fb3a1330da80adc7c5690c828366b2119bdb39fbe8c42926a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dkxm9" Jan 20 06:56:06.263419 kubelet[2906]: E0120 06:56:06.263000 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a141e6f76c65d4fb3a1330da80adc7c5690c828366b2119bdb39fbe8c42926a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-dkxm9" Jan 20 06:56:06.263419 kubelet[2906]: E0120 06:56:06.263039 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dkxm9_kube-system(c83e0ef7-8277-4d9b-af20-88f781d4eb21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dkxm9_kube-system(c83e0ef7-8277-4d9b-af20-88f781d4eb21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a141e6f76c65d4fb3a1330da80adc7c5690c828366b2119bdb39fbe8c42926a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dkxm9" podUID="c83e0ef7-8277-4d9b-af20-88f781d4eb21" Jan 20 06:56:07.214184 containerd[1680]: time="2026-01-20T06:56:07.213961339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-c4gmt,Uid:52f9dd62-5a28-4d34-8a7e-35c040c0ecfe,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:56:07.214637 containerd[1680]: time="2026-01-20T06:56:07.214540997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5pgm,Uid:62cdcd04-03ba-4af6-a274-3cc1be14a458,Namespace:kube-system,Attempt:0,}" Jan 20 06:56:07.215377 containerd[1680]: time="2026-01-20T06:56:07.215346310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8j6t9,Uid:3f79bfe2-fd9a-4aff-ac23-745eeb4426b7,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:07.305891 containerd[1680]: time="2026-01-20T06:56:07.305433663Z" level=error msg="Failed to destroy network for sandbox \"ff1b47d373c567e7a47305616470dd3978d68cfcadfac2d67308b3d31c88a471\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.308977 systemd[1]: 
run-netns-cni\x2d6e8781b3\x2ded11\x2de88d\x2d4197\x2d610fcf9039c0.mount: Deactivated successfully. Jan 20 06:56:07.314476 containerd[1680]: time="2026-01-20T06:56:07.314432567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-c4gmt,Uid:52f9dd62-5a28-4d34-8a7e-35c040c0ecfe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1b47d373c567e7a47305616470dd3978d68cfcadfac2d67308b3d31c88a471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.315209 kubelet[2906]: E0120 06:56:07.314925 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1b47d373c567e7a47305616470dd3978d68cfcadfac2d67308b3d31c88a471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.315209 kubelet[2906]: E0120 06:56:07.314977 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1b47d373c567e7a47305616470dd3978d68cfcadfac2d67308b3d31c88a471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" Jan 20 06:56:07.315209 kubelet[2906]: E0120 06:56:07.314997 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1b47d373c567e7a47305616470dd3978d68cfcadfac2d67308b3d31c88a471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" Jan 20 06:56:07.315873 kubelet[2906]: E0120 06:56:07.315026 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff1b47d373c567e7a47305616470dd3978d68cfcadfac2d67308b3d31c88a471\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:56:07.334139 containerd[1680]: time="2026-01-20T06:56:07.334067361Z" level=error msg="Failed to destroy network for sandbox \"92b547d91b5ef21a5d164a2e4e5f814ff0adace78467f68efd62115b0306c634\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.337985 systemd[1]: run-netns-cni\x2d6c880a2d\x2db97c\x2dc63c\x2dd43d\x2d2b21e7184d39.mount: Deactivated successfully. 
Jan 20 06:56:07.342741 containerd[1680]: time="2026-01-20T06:56:07.342703293Z" level=error msg="Failed to destroy network for sandbox \"386a9b6f48340fb82cfb3d54c01578a396d2a2dcbd71c4470694fe7c1aa45557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.343354 containerd[1680]: time="2026-01-20T06:56:07.343281919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8j6t9,Uid:3f79bfe2-fd9a-4aff-ac23-745eeb4426b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92b547d91b5ef21a5d164a2e4e5f814ff0adace78467f68efd62115b0306c634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.343632 kubelet[2906]: E0120 06:56:07.343606 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92b547d91b5ef21a5d164a2e4e5f814ff0adace78467f68efd62115b0306c634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.343749 kubelet[2906]: E0120 06:56:07.343736 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92b547d91b5ef21a5d164a2e4e5f814ff0adace78467f68efd62115b0306c634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:56:07.343892 kubelet[2906]: E0120 06:56:07.343773 2906 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92b547d91b5ef21a5d164a2e4e5f814ff0adace78467f68efd62115b0306c634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8j6t9" Jan 20 06:56:07.343950 kubelet[2906]: E0120 06:56:07.343815 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92b547d91b5ef21a5d164a2e4e5f814ff0adace78467f68efd62115b0306c634\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:56:07.346117 containerd[1680]: time="2026-01-20T06:56:07.346083927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5pgm,Uid:62cdcd04-03ba-4af6-a274-3cc1be14a458,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"386a9b6f48340fb82cfb3d54c01578a396d2a2dcbd71c4470694fe7c1aa45557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.346340 kubelet[2906]: E0120 06:56:07.346228 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"386a9b6f48340fb82cfb3d54c01578a396d2a2dcbd71c4470694fe7c1aa45557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:07.346340 kubelet[2906]: E0120 06:56:07.346258 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"386a9b6f48340fb82cfb3d54c01578a396d2a2dcbd71c4470694fe7c1aa45557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5pgm" Jan 20 06:56:07.346340 kubelet[2906]: E0120 06:56:07.346276 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"386a9b6f48340fb82cfb3d54c01578a396d2a2dcbd71c4470694fe7c1aa45557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5pgm" Jan 20 06:56:07.346509 kubelet[2906]: E0120 06:56:07.346460 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g5pgm_kube-system(62cdcd04-03ba-4af6-a274-3cc1be14a458)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g5pgm_kube-system(62cdcd04-03ba-4af6-a274-3cc1be14a458)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"386a9b6f48340fb82cfb3d54c01578a396d2a2dcbd71c4470694fe7c1aa45557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g5pgm" 
podUID="62cdcd04-03ba-4af6-a274-3cc1be14a458" Jan 20 06:56:07.806087 containerd[1680]: time="2026-01-20T06:56:07.805898603Z" level=error msg="get state for e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981" error="context deadline exceeded" Jan 20 06:56:07.806087 containerd[1680]: time="2026-01-20T06:56:07.805941239Z" level=warning msg="unknown status" status=0 Jan 20 06:56:08.209709 containerd[1680]: time="2026-01-20T06:56:08.209516218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77547bfd8c-4v5dx,Uid:10f0b151-ea65-4a89-806a-f2fc08df9708,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:08.210169 containerd[1680]: time="2026-01-20T06:56:08.210150971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-676fb446dd-r6rmf,Uid:e2f4d2c7-d335-42bb-9262-2a522436304e,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:08.226434 systemd[1]: run-netns-cni\x2db86507fb\x2d3147\x2dd44e\x2daece\x2d2bab99145fce.mount: Deactivated successfully. Jan 20 06:56:08.279248 containerd[1680]: time="2026-01-20T06:56:08.279206664Z" level=error msg="Failed to destroy network for sandbox \"fd493a6bb8d5a710b9e7cd38cd67da27b32c433b77718a60865aaf6298718d38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:08.281237 systemd[1]: run-netns-cni\x2dc4b0e97e\x2ddcfb\x2d682d\x2d4177\x2ddf14d0221401.mount: Deactivated successfully. 
Jan 20 06:56:08.284944 containerd[1680]: time="2026-01-20T06:56:08.284912277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77547bfd8c-4v5dx,Uid:10f0b151-ea65-4a89-806a-f2fc08df9708,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd493a6bb8d5a710b9e7cd38cd67da27b32c433b77718a60865aaf6298718d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:08.286439 kubelet[2906]: E0120 06:56:08.285224 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd493a6bb8d5a710b9e7cd38cd67da27b32c433b77718a60865aaf6298718d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:08.286439 kubelet[2906]: E0120 06:56:08.285279 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd493a6bb8d5a710b9e7cd38cd67da27b32c433b77718a60865aaf6298718d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77547bfd8c-4v5dx" Jan 20 06:56:08.286439 kubelet[2906]: E0120 06:56:08.285298 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd493a6bb8d5a710b9e7cd38cd67da27b32c433b77718a60865aaf6298718d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-77547bfd8c-4v5dx" Jan 20 06:56:08.286588 kubelet[2906]: E0120 06:56:08.285337 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77547bfd8c-4v5dx_calico-system(10f0b151-ea65-4a89-806a-f2fc08df9708)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77547bfd8c-4v5dx_calico-system(10f0b151-ea65-4a89-806a-f2fc08df9708)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd493a6bb8d5a710b9e7cd38cd67da27b32c433b77718a60865aaf6298718d38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77547bfd8c-4v5dx" podUID="10f0b151-ea65-4a89-806a-f2fc08df9708" Jan 20 06:56:08.291002 containerd[1680]: time="2026-01-20T06:56:08.290966886Z" level=error msg="Failed to destroy network for sandbox \"420255004270bce4facb001052291c539f195002d3d2331cfa5974eb2179d591\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:08.294181 systemd[1]: run-netns-cni\x2d309844ee\x2db573\x2d137f\x2d9af6\x2d21a13b79ca0f.mount: Deactivated successfully. 
Jan 20 06:56:08.295792 containerd[1680]: time="2026-01-20T06:56:08.295761371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-676fb446dd-r6rmf,Uid:e2f4d2c7-d335-42bb-9262-2a522436304e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"420255004270bce4facb001052291c539f195002d3d2331cfa5974eb2179d591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:08.296362 kubelet[2906]: E0120 06:56:08.296092 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420255004270bce4facb001052291c539f195002d3d2331cfa5974eb2179d591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:08.296362 kubelet[2906]: E0120 06:56:08.296132 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420255004270bce4facb001052291c539f195002d3d2331cfa5974eb2179d591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" Jan 20 06:56:08.296362 kubelet[2906]: E0120 06:56:08.296155 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420255004270bce4facb001052291c539f195002d3d2331cfa5974eb2179d591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" Jan 20 06:56:08.296465 kubelet[2906]: E0120 06:56:08.296193 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"420255004270bce4facb001052291c539f195002d3d2331cfa5974eb2179d591\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:56:09.035223 containerd[1680]: time="2026-01-20T06:56:09.035055180Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Jan 20 06:56:09.035223 containerd[1680]: time="2026-01-20T06:56:09.035099219Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Jan 20 06:56:09.035223 containerd[1680]: time="2026-01-20T06:56:09.035109558Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Jan 20 06:56:09.036443 containerd[1680]: time="2026-01-20T06:56:09.036330906Z" level=info msg="Ensure that container e41ba6170d0a98e0d668494e2d039961bd1bedb44b57a2052b48712189333981 in task-service has been cleanup successfully" Jan 20 06:56:09.210576 containerd[1680]: time="2026-01-20T06:56:09.210389574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-jx5cs,Uid:de3eadd9-d35e-43b1-acf5-88fe04381bf9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:56:09.275253 containerd[1680]: time="2026-01-20T06:56:09.275209074Z" level=error msg="Failed to destroy network for sandbox 
\"566e6b8dfcdb19358e8c7e7926cd765ccd59576d92554442646002d970ff2294\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:09.276978 systemd[1]: run-netns-cni\x2d813cb313\x2d4d64\x2d2045\x2dd089\x2d3418049e5645.mount: Deactivated successfully. Jan 20 06:56:09.279041 containerd[1680]: time="2026-01-20T06:56:09.278988546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-jx5cs,Uid:de3eadd9-d35e-43b1-acf5-88fe04381bf9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"566e6b8dfcdb19358e8c7e7926cd765ccd59576d92554442646002d970ff2294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:09.279403 kubelet[2906]: E0120 06:56:09.279279 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"566e6b8dfcdb19358e8c7e7926cd765ccd59576d92554442646002d970ff2294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:09.279403 kubelet[2906]: E0120 06:56:09.279375 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"566e6b8dfcdb19358e8c7e7926cd765ccd59576d92554442646002d970ff2294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" Jan 20 06:56:09.279903 kubelet[2906]: E0120 06:56:09.279660 2906 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"566e6b8dfcdb19358e8c7e7926cd765ccd59576d92554442646002d970ff2294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" Jan 20 06:56:09.279903 kubelet[2906]: E0120 06:56:09.279710 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"566e6b8dfcdb19358e8c7e7926cd765ccd59576d92554442646002d970ff2294\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:56:09.373517 containerd[1680]: time="2026-01-20T06:56:09.372418734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 06:56:10.210011 containerd[1680]: time="2026-01-20T06:56:10.209818154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8w7k,Uid:506bd27e-3197-4d34-a858-e04017d318df,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:10.261606 containerd[1680]: time="2026-01-20T06:56:10.261536356Z" level=error msg="Failed to destroy network for sandbox \"84957e51cc4b3303fe590f0a48ad0ba7d842cadcc86f4c2f9069417fab5c2e2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 20 06:56:10.263445 systemd[1]: run-netns-cni\x2d5f49b1ed\x2dbc63\x2de3e1\x2df229\x2d327030378b45.mount: Deactivated successfully. Jan 20 06:56:10.270265 containerd[1680]: time="2026-01-20T06:56:10.270217509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8w7k,Uid:506bd27e-3197-4d34-a858-e04017d318df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84957e51cc4b3303fe590f0a48ad0ba7d842cadcc86f4c2f9069417fab5c2e2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:10.270902 kubelet[2906]: E0120 06:56:10.270466 2906 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84957e51cc4b3303fe590f0a48ad0ba7d842cadcc86f4c2f9069417fab5c2e2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:56:10.270902 kubelet[2906]: E0120 06:56:10.270527 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84957e51cc4b3303fe590f0a48ad0ba7d842cadcc86f4c2f9069417fab5c2e2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:56:10.270902 kubelet[2906]: E0120 06:56:10.270545 2906 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84957e51cc4b3303fe590f0a48ad0ba7d842cadcc86f4c2f9069417fab5c2e2e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8w7k" Jan 20 06:56:10.271009 kubelet[2906]: E0120 06:56:10.270591 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84957e51cc4b3303fe590f0a48ad0ba7d842cadcc86f4c2f9069417fab5c2e2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:56:17.160086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886115818.mount: Deactivated successfully. 
Jan 20 06:56:17.383775 containerd[1680]: time="2026-01-20T06:56:17.383719299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:56:17.385978 containerd[1680]: time="2026-01-20T06:56:17.385941490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 06:56:17.387489 containerd[1680]: time="2026-01-20T06:56:17.387443411Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:56:17.389839 containerd[1680]: time="2026-01-20T06:56:17.389794225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:56:17.390278 containerd[1680]: time="2026-01-20T06:56:17.390254607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.016359702s" Jan 20 06:56:17.390312 containerd[1680]: time="2026-01-20T06:56:17.390279366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 06:56:17.403919 containerd[1680]: time="2026-01-20T06:56:17.403541832Z" level=info msg="CreateContainer within sandbox \"733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 06:56:17.422802 containerd[1680]: time="2026-01-20T06:56:17.421722886Z" level=info msg="Container 
b8244642b38153116b1bd75977e217f63c673f909c6445f2442d0a32fffd393b: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:56:17.432720 containerd[1680]: time="2026-01-20T06:56:17.432687848Z" level=info msg="CreateContainer within sandbox \"733a7bc860ed730126b1de773d3c7a8e0c832f6e1ff76eba287dca6ac36896f5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b8244642b38153116b1bd75977e217f63c673f909c6445f2442d0a32fffd393b\"" Jan 20 06:56:17.433325 containerd[1680]: time="2026-01-20T06:56:17.433302243Z" level=info msg="StartContainer for \"b8244642b38153116b1bd75977e217f63c673f909c6445f2442d0a32fffd393b\"" Jan 20 06:56:17.435188 containerd[1680]: time="2026-01-20T06:56:17.435164095Z" level=info msg="connecting to shim b8244642b38153116b1bd75977e217f63c673f909c6445f2442d0a32fffd393b" address="unix:///run/containerd/s/0fe5767b72c66f17079ddcdafbe021ab3425f413c17d8d001b36cc3f5969184d" protocol=ttrpc version=3 Jan 20 06:56:17.486969 systemd[1]: Started cri-containerd-b8244642b38153116b1bd75977e217f63c673f909c6445f2442d0a32fffd393b.scope - libcontainer container b8244642b38153116b1bd75977e217f63c673f909c6445f2442d0a32fffd393b. 
Jan 20 06:56:17.532000 audit: BPF prog-id=172 op=LOAD Jan 20 06:56:17.534515 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 06:56:17.534579 kernel: audit: type=1334 audit(1768892177.532:575): prog-id=172 op=LOAD Jan 20 06:56:17.532000 audit[4153]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.538017 kernel: audit: type=1300 audit(1768892177.532:575): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.542180 kernel: audit: type=1327 audit(1768892177.532:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.533000 audit: BPF prog-id=173 op=LOAD Jan 20 06:56:17.545176 kernel: audit: type=1334 audit(1768892177.533:576): prog-id=173 op=LOAD Jan 20 06:56:17.533000 audit[4153]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.547600 kernel: audit: type=1300 audit(1768892177.533:576): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.551809 kernel: audit: type=1327 audit(1768892177.533:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.533000 audit: BPF prog-id=173 op=UNLOAD Jan 20 06:56:17.555378 kernel: audit: type=1334 audit(1768892177.533:577): prog-id=173 op=UNLOAD Jan 20 06:56:17.555423 kernel: audit: type=1300 audit(1768892177.533:577): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.533000 audit[4153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.533000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.563461 kernel: audit: type=1327 audit(1768892177.533:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.564903 kernel: audit: type=1334 audit(1768892177.533:578): prog-id=172 op=UNLOAD Jan 20 06:56:17.533000 audit: BPF prog-id=172 op=UNLOAD Jan 20 06:56:17.533000 audit[4153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.533000 audit: BPF prog-id=174 op=LOAD Jan 20 06:56:17.533000 audit[4153]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3408 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:17.533000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323434363432623338313533313136623162643735393737653231 Jan 20 06:56:17.572993 containerd[1680]: time="2026-01-20T06:56:17.572962939Z" level=info msg="StartContainer for \"b8244642b38153116b1bd75977e217f63c673f909c6445f2442d0a32fffd393b\" returns successfully" Jan 20 06:56:17.665454 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 06:56:17.666259 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 20 06:56:17.883024 kubelet[2906]: I0120 06:56:17.882925 2906 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-ca-bundle\") pod \"10f0b151-ea65-4a89-806a-f2fc08df9708\" (UID: \"10f0b151-ea65-4a89-806a-f2fc08df9708\") " Jan 20 06:56:17.883981 kubelet[2906]: I0120 06:56:17.883471 2906 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-backend-key-pair\") pod \"10f0b151-ea65-4a89-806a-f2fc08df9708\" (UID: \"10f0b151-ea65-4a89-806a-f2fc08df9708\") " Jan 20 06:56:17.883981 kubelet[2906]: I0120 06:56:17.883504 2906 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7xs\" (UniqueName: \"kubernetes.io/projected/10f0b151-ea65-4a89-806a-f2fc08df9708-kube-api-access-4c7xs\") pod \"10f0b151-ea65-4a89-806a-f2fc08df9708\" (UID: \"10f0b151-ea65-4a89-806a-f2fc08df9708\") " Jan 20 06:56:17.885293 kubelet[2906]: I0120 06:56:17.885012 2906 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "10f0b151-ea65-4a89-806a-f2fc08df9708" (UID: "10f0b151-ea65-4a89-806a-f2fc08df9708"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 06:56:17.888911 kubelet[2906]: I0120 06:56:17.888875 2906 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f0b151-ea65-4a89-806a-f2fc08df9708-kube-api-access-4c7xs" (OuterVolumeSpecName: "kube-api-access-4c7xs") pod "10f0b151-ea65-4a89-806a-f2fc08df9708" (UID: "10f0b151-ea65-4a89-806a-f2fc08df9708"). InnerVolumeSpecName "kube-api-access-4c7xs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 06:56:17.889973 kubelet[2906]: I0120 06:56:17.889953 2906 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "10f0b151-ea65-4a89-806a-f2fc08df9708" (UID: "10f0b151-ea65-4a89-806a-f2fc08df9708"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 06:56:17.984485 kubelet[2906]: I0120 06:56:17.984449 2906 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-ca-bundle\") on node \"ci-4585-0-0-n-f719bce5cf\" DevicePath \"\"" Jan 20 06:56:17.984485 kubelet[2906]: I0120 06:56:17.984481 2906 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10f0b151-ea65-4a89-806a-f2fc08df9708-whisker-backend-key-pair\") on node \"ci-4585-0-0-n-f719bce5cf\" DevicePath \"\"" Jan 20 06:56:17.984485 kubelet[2906]: I0120 06:56:17.984491 2906 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4c7xs\" (UniqueName: \"kubernetes.io/projected/10f0b151-ea65-4a89-806a-f2fc08df9708-kube-api-access-4c7xs\") on node \"ci-4585-0-0-n-f719bce5cf\" DevicePath \"\"" Jan 20 06:56:18.160901 systemd[1]: var-lib-kubelet-pods-10f0b151\x2dea65\x2d4a89\x2d806a\x2df2fc08df9708-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 06:56:18.160993 systemd[1]: var-lib-kubelet-pods-10f0b151\x2dea65\x2d4a89\x2d806a\x2df2fc08df9708-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4c7xs.mount: Deactivated successfully. Jan 20 06:56:18.409901 systemd[1]: Removed slice kubepods-besteffort-pod10f0b151_ea65_4a89_806a_f2fc08df9708.slice - libcontainer container kubepods-besteffort-pod10f0b151_ea65_4a89_806a_f2fc08df9708.slice. 
Jan 20 06:56:18.421928 kubelet[2906]: I0120 06:56:18.421666 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8fx7p" podStartSLOduration=1.50396589 podStartE2EDuration="38.421641548s" podCreationTimestamp="2026-01-20 06:55:40 +0000 UTC" firstStartedPulling="2026-01-20 06:55:40.47322354 +0000 UTC m=+19.384861300" lastFinishedPulling="2026-01-20 06:56:17.390899199 +0000 UTC m=+56.302536958" observedRunningTime="2026-01-20 06:56:18.421638175 +0000 UTC m=+57.333275958" watchObservedRunningTime="2026-01-20 06:56:18.421641548 +0000 UTC m=+57.333279309" Jan 20 06:56:18.493302 systemd[1]: Created slice kubepods-besteffort-podeb945f54_1cfd_4248_85ea_34b880e5b4b5.slice - libcontainer container kubepods-besteffort-podeb945f54_1cfd_4248_85ea_34b880e5b4b5.slice. Jan 20 06:56:18.588628 kubelet[2906]: I0120 06:56:18.588586 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfck\" (UniqueName: \"kubernetes.io/projected/eb945f54-1cfd-4248-85ea-34b880e5b4b5-kube-api-access-nkfck\") pod \"whisker-79cc8b6859-6sdc2\" (UID: \"eb945f54-1cfd-4248-85ea-34b880e5b4b5\") " pod="calico-system/whisker-79cc8b6859-6sdc2" Jan 20 06:56:18.588814 kubelet[2906]: I0120 06:56:18.588803 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb945f54-1cfd-4248-85ea-34b880e5b4b5-whisker-backend-key-pair\") pod \"whisker-79cc8b6859-6sdc2\" (UID: \"eb945f54-1cfd-4248-85ea-34b880e5b4b5\") " pod="calico-system/whisker-79cc8b6859-6sdc2" Jan 20 06:56:18.588927 kubelet[2906]: I0120 06:56:18.588877 2906 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb945f54-1cfd-4248-85ea-34b880e5b4b5-whisker-ca-bundle\") pod \"whisker-79cc8b6859-6sdc2\" (UID: 
\"eb945f54-1cfd-4248-85ea-34b880e5b4b5\") " pod="calico-system/whisker-79cc8b6859-6sdc2" Jan 20 06:56:18.798368 containerd[1680]: time="2026-01-20T06:56:18.798055367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79cc8b6859-6sdc2,Uid:eb945f54-1cfd-4248-85ea-34b880e5b4b5,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:19.210157 containerd[1680]: time="2026-01-20T06:56:19.210082239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dkxm9,Uid:c83e0ef7-8277-4d9b-af20-88f781d4eb21,Namespace:kube-system,Attempt:0,}" Jan 20 06:56:19.212097 kubelet[2906]: I0120 06:56:19.212064 2906 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f0b151-ea65-4a89-806a-f2fc08df9708" path="/var/lib/kubelet/pods/10f0b151-ea65-4a89-806a-f2fc08df9708/volumes" Jan 20 06:56:19.291000 audit: BPF prog-id=175 op=LOAD Jan 20 06:56:19.291000 audit[4364]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed73785c0 a2=98 a3=1fffffffffffffff items=0 ppid=4267 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.291000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:56:19.291000 audit: BPF prog-id=175 op=UNLOAD Jan 20 06:56:19.291000 audit[4364]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffed7378590 a3=0 items=0 ppid=4267 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.291000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:56:19.291000 audit: BPF prog-id=176 op=LOAD Jan 20 06:56:19.291000 audit[4364]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed73784a0 a2=94 a3=3 items=0 ppid=4267 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.291000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:56:19.291000 audit: BPF prog-id=176 op=UNLOAD Jan 20 06:56:19.291000 audit[4364]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffed73784a0 a2=94 a3=3 items=0 ppid=4267 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.291000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:56:19.291000 audit: BPF prog-id=177 op=LOAD Jan 20 06:56:19.291000 audit[4364]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed73784e0 a2=94 a3=7ffed73786c0 items=0 ppid=4267 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.291000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:56:19.291000 audit: BPF prog-id=177 op=UNLOAD Jan 20 06:56:19.291000 audit[4364]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffed73784e0 a2=94 a3=7ffed73786c0 items=0 ppid=4267 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.291000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:56:19.292000 audit: BPF prog-id=178 op=LOAD Jan 20 06:56:19.292000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0a8d3820 a2=98 a3=3 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.292000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.292000 audit: BPF prog-id=178 op=UNLOAD Jan 20 06:56:19.292000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff0a8d37f0 a3=0 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.292000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.293000 audit: BPF prog-id=179 op=LOAD Jan 20 06:56:19.293000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0a8d3610 a2=94 a3=54428f items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.293000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.293000 audit: BPF prog-id=179 op=UNLOAD Jan 20 06:56:19.293000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0a8d3610 a2=94 a3=54428f items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.293000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.293000 audit: BPF prog-id=180 op=LOAD Jan 20 06:56:19.293000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0a8d3640 a2=94 a3=2 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.293000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.293000 audit: BPF prog-id=180 op=UNLOAD Jan 20 06:56:19.293000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0a8d3640 a2=0 a3=2 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.293000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 
06:56:19.308626 systemd-networkd[1581]: calicd03fcb8825: Link UP Jan 20 06:56:19.309247 systemd-networkd[1581]: calicd03fcb8825: Gained carrier Jan 20 06:56:19.347061 containerd[1680]: 2026-01-20 06:56:18.830 [INFO][4239] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 06:56:19.347061 containerd[1680]: 2026-01-20 06:56:18.963 [INFO][4239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0 whisker-79cc8b6859- calico-system eb945f54-1cfd-4248-85ea-34b880e5b4b5 915 0 2026-01-20 06:56:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79cc8b6859 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4585-0-0-n-f719bce5cf whisker-79cc8b6859-6sdc2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicd03fcb8825 [] [] }} ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-" Jan 20 06:56:19.347061 containerd[1680]: 2026-01-20 06:56:18.964 [INFO][4239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" Jan 20 06:56:19.347061 containerd[1680]: 2026-01-20 06:56:19.011 [INFO][4290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" HandleID="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Workload="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" Jan 20 06:56:19.347269 
containerd[1680]: 2026-01-20 06:56:19.011 [INFO][4290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" HandleID="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Workload="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"whisker-79cc8b6859-6sdc2", "timestamp":"2026-01-20 06:56:19.011136998 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.011 [INFO][4290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.011 [INFO][4290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.011 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.022 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.028 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.032 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.034 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347269 containerd[1680]: 2026-01-20 06:56:19.040 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347478 containerd[1680]: 2026-01-20 06:56:19.040 [INFO][4290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347478 containerd[1680]: 2026-01-20 06:56:19.042 [INFO][4290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25 Jan 20 06:56:19.347478 containerd[1680]: 2026-01-20 06:56:19.050 [INFO][4290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347478 containerd[1680]: 2026-01-20 06:56:19.055 [INFO][4290] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.93.129/26] block=192.168.93.128/26 handle="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347478 containerd[1680]: 2026-01-20 06:56:19.055 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.129/26] handle="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.347478 containerd[1680]: 2026-01-20 06:56:19.056 [INFO][4290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:56:19.347478 containerd[1680]: 2026-01-20 06:56:19.056 [INFO][4290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.129/26] IPv6=[] ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" HandleID="k8s-pod-network.de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Workload="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" Jan 20 06:56:19.347868 containerd[1680]: 2026-01-20 06:56:19.061 [INFO][4239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0", GenerateName:"whisker-79cc8b6859-", Namespace:"calico-system", SelfLink:"", UID:"eb945f54-1cfd-4248-85ea-34b880e5b4b5", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79cc8b6859", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"whisker-79cc8b6859-6sdc2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicd03fcb8825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:19.347868 containerd[1680]: 2026-01-20 06:56:19.062 [INFO][4239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.129/32] ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" Jan 20 06:56:19.348151 containerd[1680]: 2026-01-20 06:56:19.062 [INFO][4239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd03fcb8825 ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" Jan 20 06:56:19.348151 containerd[1680]: 2026-01-20 06:56:19.333 [INFO][4239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" Jan 20 06:56:19.348281 containerd[1680]: 2026-01-20 06:56:19.333 [INFO][4239] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0", GenerateName:"whisker-79cc8b6859-", Namespace:"calico-system", SelfLink:"", UID:"eb945f54-1cfd-4248-85ea-34b880e5b4b5", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79cc8b6859", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25", Pod:"whisker-79cc8b6859-6sdc2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicd03fcb8825", MAC:"56:83:21:08:82:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:19.348506 containerd[1680]: 2026-01-20 06:56:19.343 [INFO][4239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" Namespace="calico-system" Pod="whisker-79cc8b6859-6sdc2" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-whisker--79cc8b6859--6sdc2-eth0" Jan 20 06:56:19.467008 systemd-networkd[1581]: cali50caeba528b: Link UP Jan 20 06:56:19.468265 systemd-networkd[1581]: cali50caeba528b: Gained carrier Jan 20 06:56:19.489879 containerd[1680]: 2026-01-20 06:56:19.387 [INFO][4386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0 coredns-668d6bf9bc- kube-system c83e0ef7-8277-4d9b-af20-88f781d4eb21 809 0 2026-01-20 06:55:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4585-0-0-n-f719bce5cf coredns-668d6bf9bc-dkxm9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50caeba528b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-" Jan 20 06:56:19.489879 containerd[1680]: 2026-01-20 06:56:19.387 [INFO][4386] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" Jan 20 06:56:19.489879 containerd[1680]: 2026-01-20 06:56:19.421 [INFO][4398] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" HandleID="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" 
Workload="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.421 [INFO][4398] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" HandleID="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Workload="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048efa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"coredns-668d6bf9bc-dkxm9", "timestamp":"2026-01-20 06:56:19.421464693 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.421 [INFO][4398] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.421 [INFO][4398] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.421 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.430 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.434 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.441 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.442 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490099 containerd[1680]: 2026-01-20 06:56:19.444 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490283 containerd[1680]: 2026-01-20 06:56:19.444 [INFO][4398] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490283 containerd[1680]: 2026-01-20 06:56:19.445 [INFO][4398] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95 Jan 20 06:56:19.490283 containerd[1680]: 2026-01-20 06:56:19.449 [INFO][4398] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490283 containerd[1680]: 2026-01-20 06:56:19.456 [INFO][4398] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.93.130/26] block=192.168.93.128/26 handle="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490283 containerd[1680]: 2026-01-20 06:56:19.456 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.130/26] handle="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:19.490283 containerd[1680]: 2026-01-20 06:56:19.456 [INFO][4398] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:56:19.490283 containerd[1680]: 2026-01-20 06:56:19.456 [INFO][4398] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.130/26] IPv6=[] ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" HandleID="k8s-pod-network.bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Workload="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" Jan 20 06:56:19.490493 containerd[1680]: 2026-01-20 06:56:19.460 [INFO][4386] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83e0ef7-8277-4d9b-af20-88f781d4eb21", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"coredns-668d6bf9bc-dkxm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50caeba528b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:19.490493 containerd[1680]: 2026-01-20 06:56:19.460 [INFO][4386] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.130/32] ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" Jan 20 06:56:19.490493 containerd[1680]: 2026-01-20 06:56:19.460 [INFO][4386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50caeba528b ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" Jan 20 06:56:19.490493 containerd[1680]: 2026-01-20 06:56:19.467 [INFO][4386] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" Jan 20 06:56:19.490493 containerd[1680]: 2026-01-20 06:56:19.470 [INFO][4386] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83e0ef7-8277-4d9b-af20-88f781d4eb21", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95", Pod:"coredns-668d6bf9bc-dkxm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50caeba528b", 
MAC:"66:8a:b3:df:89:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:19.490493 containerd[1680]: 2026-01-20 06:56:19.484 [INFO][4386] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" Namespace="kube-system" Pod="coredns-668d6bf9bc-dkxm9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--dkxm9-eth0" Jan 20 06:56:19.554000 audit: BPF prog-id=181 op=LOAD Jan 20 06:56:19.554000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0a8d3500 a2=94 a3=1 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.554000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.554000 audit: BPF prog-id=181 op=UNLOAD Jan 20 06:56:19.554000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0a8d3500 a2=94 a3=1 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.554000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.565000 audit: BPF prog-id=182 op=LOAD Jan 20 06:56:19.565000 audit[4367]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=5 a1=7fff0a8d34f0 a2=94 a3=4 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.565000 audit: BPF prog-id=182 op=UNLOAD Jan 20 06:56:19.565000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0a8d34f0 a2=0 a3=4 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.565000 audit: BPF prog-id=183 op=LOAD Jan 20 06:56:19.565000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0a8d3350 a2=94 a3=5 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.565000 audit: BPF prog-id=183 op=UNLOAD Jan 20 06:56:19.565000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0a8d3350 a2=0 a3=5 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.565000 audit: BPF prog-id=184 op=LOAD Jan 20 06:56:19.565000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0a8d3570 a2=94 a3=6 items=0 
ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.565000 audit: BPF prog-id=184 op=UNLOAD Jan 20 06:56:19.565000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0a8d3570 a2=0 a3=6 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.566000 audit: BPF prog-id=185 op=LOAD Jan 20 06:56:19.566000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0a8d2d20 a2=94 a3=88 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.566000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.566000 audit: BPF prog-id=186 op=LOAD Jan 20 06:56:19.566000 audit[4367]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff0a8d2ba0 a2=94 a3=2 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.566000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.566000 audit: BPF prog-id=186 op=UNLOAD Jan 20 06:56:19.566000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff0a8d2bd0 a2=0 a3=7fff0a8d2cd0 items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.566000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.566000 audit: BPF prog-id=185 op=UNLOAD Jan 20 06:56:19.566000 audit[4367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=34011d10 a2=0 a3=28bbf92155314cdc items=0 ppid=4267 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.566000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:56:19.574000 audit: BPF prog-id=187 op=LOAD Jan 20 06:56:19.574000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe14cd4f50 a2=98 a3=1999999999999999 items=0 ppid=4267 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.574000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:56:19.575000 audit: BPF prog-id=187 op=UNLOAD Jan 20 06:56:19.575000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe14cd4f20 a3=0 items=0 ppid=4267 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.575000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:56:19.575000 audit: BPF prog-id=188 op=LOAD Jan 20 06:56:19.575000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe14cd4e30 a2=94 a3=ffff items=0 ppid=4267 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.575000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:56:19.575000 audit: BPF prog-id=188 op=UNLOAD Jan 20 06:56:19.575000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe14cd4e30 a2=94 a3=ffff items=0 ppid=4267 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.575000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:56:19.575000 audit: BPF prog-id=189 op=LOAD Jan 20 06:56:19.575000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe14cd4e70 a2=94 a3=7ffe14cd5050 items=0 ppid=4267 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.575000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:56:19.575000 audit: BPF prog-id=189 op=UNLOAD Jan 20 06:56:19.575000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe14cd4e70 a2=94 a3=7ffe14cd5050 items=0 ppid=4267 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.575000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:56:19.806393 systemd-networkd[1581]: vxlan.calico: Link UP Jan 20 06:56:19.806401 systemd-networkd[1581]: vxlan.calico: Gained carrier Jan 20 06:56:19.841000 audit: BPF prog-id=190 op=LOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6a621910 a2=98 a3=0 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=190 op=UNLOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7ffc6a6218e0 a3=0 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=191 op=LOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6a621720 a2=94 a3=54428f items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=191 op=UNLOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6a621720 a2=94 a3=54428f items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=192 op=LOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6a621750 a2=94 a3=2 items=0 
ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=192 op=UNLOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6a621750 a2=0 a3=2 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=193 op=LOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6a621500 a2=94 a3=4 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=193 op=UNLOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6a621500 a2=94 a3=4 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.841000 audit: BPF prog-id=194 op=LOAD Jan 20 06:56:19.841000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6a621600 a2=94 a3=7ffc6a621780 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.841000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.842000 audit: BPF prog-id=194 op=UNLOAD Jan 20 06:56:19.842000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6a621600 a2=0 a3=7ffc6a621780 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.842000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.843000 audit: BPF prog-id=195 op=LOAD Jan 20 06:56:19.843000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6a620d30 a2=94 a3=2 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.843000 audit: BPF prog-id=195 op=UNLOAD Jan 20 06:56:19.843000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6a620d30 a2=0 a3=2 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.843000 audit: BPF prog-id=196 op=LOAD Jan 20 06:56:19.843000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6a620e30 a2=94 a3=30 items=0 ppid=4267 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:56:19.851000 audit: BPF prog-id=197 op=LOAD Jan 20 06:56:19.851000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee061c300 a2=98 a3=0 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.851000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:19.851000 audit: BPF prog-id=197 op=UNLOAD Jan 20 06:56:19.851000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffee061c2d0 a3=0 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.851000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:19.851000 audit: BPF prog-id=198 op=LOAD Jan 20 06:56:19.851000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee061c0f0 a2=94 a3=54428f items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.851000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:19.851000 audit: BPF prog-id=198 op=UNLOAD Jan 20 06:56:19.851000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee061c0f0 a2=94 a3=54428f items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.851000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:19.851000 audit: BPF prog-id=199 op=LOAD Jan 20 06:56:19.851000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee061c120 a2=94 a3=2 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.851000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:19.851000 audit: BPF prog-id=199 op=UNLOAD Jan 20 06:56:19.851000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee061c120 a2=0 a3=2 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:19.851000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.022000 audit: BPF prog-id=200 op=LOAD Jan 20 06:56:20.022000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee061bfe0 a2=94 a3=1 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.022000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.023000 audit: BPF prog-id=200 op=UNLOAD Jan 20 06:56:20.023000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee061bfe0 a2=94 a3=1 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.023000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=201 op=LOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee061bfd0 a2=94 a3=4 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=201 op=UNLOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffee061bfd0 a2=0 a3=4 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=202 op=LOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffee061be30 a2=94 a3=5 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=202 op=UNLOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffee061be30 a2=0 a3=5 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=203 op=LOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee061c050 a2=94 a3=6 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=203 op=UNLOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffee061c050 a2=0 a3=6 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=204 op=LOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee061b800 a2=94 a3=88 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.034000 audit: BPF prog-id=205 op=LOAD Jan 20 06:56:20.034000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffee061b680 a2=94 a3=2 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.034000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.035000 audit: BPF prog-id=205 op=UNLOAD Jan 20 06:56:20.035000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffee061b6b0 a2=0 a3=7ffee061b7b0 items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.035000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.035000 audit: BPF prog-id=204 op=UNLOAD Jan 20 06:56:20.035000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=13b76d10 a2=0 a3=7c1001538fc74fff items=0 ppid=4267 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.035000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:56:20.041000 audit: BPF prog-id=196 op=UNLOAD Jan 20 06:56:20.041000 audit[4267]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ebe300 a2=0 a3=0 items=0 ppid=4252 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.041000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 06:56:20.093000 audit[4493]: NETFILTER_CFG table=nat:119 
family=2 entries=15 op=nft_register_chain pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:20.093000 audit[4493]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff4fe8e4f0 a2=0 a3=7fff4fe8e4dc items=0 ppid=4267 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.093000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:20.099000 audit[4494]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4494 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:20.099000 audit[4494]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff2f794ce0 a2=0 a3=7fff2f794ccc items=0 ppid=4267 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.099000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:20.104000 audit[4492]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:20.104000 audit[4492]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffda0ac7a90 a2=0 a3=7ffda0ac7a7c items=0 ppid=4267 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.104000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:20.116000 audit[4497]: NETFILTER_CFG table=filter:122 family=2 entries=128 op=nft_register_chain pid=4497 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:20.116000 audit[4497]: SYSCALL arch=c000003e syscall=46 success=yes exit=72768 a0=3 a1=7ffe7c75a9a0 a2=0 a3=7ffe7c75a98c items=0 ppid=4267 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.116000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:20.144950 containerd[1680]: time="2026-01-20T06:56:20.144901107Z" level=info msg="connecting to shim de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25" address="unix:///run/containerd/s/b2182b11b7f25dd9b18e8ba9851bd7a7960ff1481f799987f90a9489dfc66a69" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:20.148884 containerd[1680]: time="2026-01-20T06:56:20.148098991Z" level=info msg="connecting to shim bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95" address="unix:///run/containerd/s/3c016f582effd23b79c7f71555ef0c9a2ca147950235b21b7c25875870c228cc" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:20.189119 systemd[1]: Started cri-containerd-de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25.scope - libcontainer container de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25. Jan 20 06:56:20.193210 systemd[1]: Started cri-containerd-bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95.scope - libcontainer container bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95. 
Jan 20 06:56:20.206000 audit: BPF prog-id=206 op=LOAD Jan 20 06:56:20.207000 audit: BPF prog-id=207 op=LOAD Jan 20 06:56:20.207000 audit[4549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4521 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264386164646337656330383037643466383466616265356565366334 Jan 20 06:56:20.207000 audit: BPF prog-id=207 op=UNLOAD Jan 20 06:56:20.207000 audit[4549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264386164646337656330383037643466383466616265356565366334 Jan 20 06:56:20.207000 audit: BPF prog-id=208 op=LOAD Jan 20 06:56:20.207000 audit[4549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4521 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.207000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264386164646337656330383037643466383466616265356565366334 Jan 20 06:56:20.208000 audit: BPF prog-id=209 op=LOAD Jan 20 06:56:20.208000 audit[4549]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4521 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264386164646337656330383037643466383466616265356565366334 Jan 20 06:56:20.208000 audit: BPF prog-id=209 op=UNLOAD Jan 20 06:56:20.208000 audit[4549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264386164646337656330383037643466383466616265356565366334 Jan 20 06:56:20.208000 audit: BPF prog-id=208 op=UNLOAD Jan 20 06:56:20.208000 audit[4549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:56:20.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264386164646337656330383037643466383466616265356565366334 Jan 20 06:56:20.208000 audit: BPF prog-id=210 op=LOAD Jan 20 06:56:20.208000 audit[4549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4521 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264386164646337656330383037643466383466616265356565366334 Jan 20 06:56:20.211124 containerd[1680]: time="2026-01-20T06:56:20.211016465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8j6t9,Uid:3f79bfe2-fd9a-4aff-ac23-745eeb4426b7,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:20.216000 audit: BPF prog-id=211 op=LOAD Jan 20 06:56:20.217000 audit: BPF prog-id=212 op=LOAD Jan 20 06:56:20.217000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4517 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303866376236353061306564386430303932663462613634343136 Jan 20 06:56:20.218000 audit: BPF 
prog-id=212 op=UNLOAD Jan 20 06:56:20.218000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303866376236353061306564386430303932663462613634343136 Jan 20 06:56:20.218000 audit: BPF prog-id=213 op=LOAD Jan 20 06:56:20.218000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4517 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303866376236353061306564386430303932663462613634343136 Jan 20 06:56:20.218000 audit: BPF prog-id=214 op=LOAD Jan 20 06:56:20.218000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4517 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.218000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303866376236353061306564386430303932663462613634343136 Jan 20 06:56:20.218000 audit: BPF prog-id=214 op=UNLOAD Jan 20 06:56:20.218000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303866376236353061306564386430303932663462613634343136 Jan 20 06:56:20.218000 audit: BPF prog-id=213 op=UNLOAD Jan 20 06:56:20.218000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4517 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303866376236353061306564386430303932663462613634343136 Jan 20 06:56:20.218000 audit: BPF prog-id=215 op=LOAD Jan 20 06:56:20.218000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4517 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:56:20.218000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465303866376236353061306564386430303932663462613634343136 Jan 20 06:56:20.261542 containerd[1680]: time="2026-01-20T06:56:20.261505210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dkxm9,Uid:c83e0ef7-8277-4d9b-af20-88f781d4eb21,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95\"" Jan 20 06:56:20.264481 containerd[1680]: time="2026-01-20T06:56:20.264454389Z" level=info msg="CreateContainer within sandbox \"bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:56:20.282617 containerd[1680]: time="2026-01-20T06:56:20.282538729Z" level=info msg="Container 69447736e6acc5710fcc1c9db4ec2d8afeb8d52ecf364b7ec1ff5689fb2b2f91: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:56:20.297077 containerd[1680]: time="2026-01-20T06:56:20.297045325Z" level=info msg="CreateContainer within sandbox \"bd8addc7ec0807d4f84fabe5ee6c4348036cf461577abd6576406636e0e09e95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69447736e6acc5710fcc1c9db4ec2d8afeb8d52ecf364b7ec1ff5689fb2b2f91\"" Jan 20 06:56:20.299030 containerd[1680]: time="2026-01-20T06:56:20.299005711Z" level=info msg="StartContainer for \"69447736e6acc5710fcc1c9db4ec2d8afeb8d52ecf364b7ec1ff5689fb2b2f91\"" Jan 20 06:56:20.300903 containerd[1680]: time="2026-01-20T06:56:20.300879500Z" level=info msg="connecting to shim 69447736e6acc5710fcc1c9db4ec2d8afeb8d52ecf364b7ec1ff5689fb2b2f91" address="unix:///run/containerd/s/3c016f582effd23b79c7f71555ef0c9a2ca147950235b21b7c25875870c228cc" protocol=ttrpc version=3 Jan 20 06:56:20.307071 containerd[1680]: time="2026-01-20T06:56:20.307006240Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79cc8b6859-6sdc2,Uid:eb945f54-1cfd-4248-85ea-34b880e5b4b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"de08f7b650a0ed8d0092f4ba6441662049fa2b936c245881f347919be6e0df25\"" Jan 20 06:56:20.309464 containerd[1680]: time="2026-01-20T06:56:20.309426478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:56:20.340231 systemd[1]: Started cri-containerd-69447736e6acc5710fcc1c9db4ec2d8afeb8d52ecf364b7ec1ff5689fb2b2f91.scope - libcontainer container 69447736e6acc5710fcc1c9db4ec2d8afeb8d52ecf364b7ec1ff5689fb2b2f91. Jan 20 06:56:20.358000 audit: BPF prog-id=216 op=LOAD Jan 20 06:56:20.359000 audit: BPF prog-id=217 op=LOAD Jan 20 06:56:20.359000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4521 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639343437373336653661636335373130666363316339646234656332 Jan 20 06:56:20.359000 audit: BPF prog-id=217 op=UNLOAD Jan 20 06:56:20.359000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.359000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639343437373336653661636335373130666363316339646234656332 Jan 20 06:56:20.359000 audit: BPF prog-id=218 op=LOAD Jan 20 06:56:20.359000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4521 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639343437373336653661636335373130666363316339646234656332 Jan 20 06:56:20.360000 audit: BPF prog-id=219 op=LOAD Jan 20 06:56:20.360000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4521 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639343437373336653661636335373130666363316339646234656332 Jan 20 06:56:20.360000 audit: BPF prog-id=219 op=UNLOAD Jan 20 06:56:20.360000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:56:20.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639343437373336653661636335373130666363316339646234656332 Jan 20 06:56:20.360000 audit: BPF prog-id=218 op=UNLOAD Jan 20 06:56:20.360000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639343437373336653661636335373130666363316339646234656332 Jan 20 06:56:20.360000 audit: BPF prog-id=220 op=LOAD Jan 20 06:56:20.360000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4521 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639343437373336653661636335373130666363316339646234656332 Jan 20 06:56:20.368254 systemd-networkd[1581]: calicd03fcb8825: Gained IPv6LL Jan 20 06:56:20.391595 systemd-networkd[1581]: cali54bebfffc62: Link UP Jan 20 06:56:20.391774 systemd-networkd[1581]: cali54bebfffc62: Gained carrier Jan 20 06:56:20.400666 containerd[1680]: time="2026-01-20T06:56:20.400592208Z" level=info 
msg="StartContainer for \"69447736e6acc5710fcc1c9db4ec2d8afeb8d52ecf364b7ec1ff5689fb2b2f91\" returns successfully" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.267 [INFO][4579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0 goldmane-666569f655- calico-system 3f79bfe2-fd9a-4aff-ac23-745eeb4426b7 806 0 2026-01-20 06:55:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4585-0-0-n-f719bce5cf goldmane-666569f655-8j6t9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali54bebfffc62 [] [] }} ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.267 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.315 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" HandleID="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Workload="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.315 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" HandleID="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Workload="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"goldmane-666569f655-8j6t9", "timestamp":"2026-01-20 06:56:20.315438609 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.315 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.315 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.315 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.326 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.337 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.347 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.349 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.353 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.353 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.354 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319 Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.360 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.369 [INFO][4600] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.93.131/26] block=192.168.93.128/26 handle="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.369 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.131/26] handle="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.369 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:56:20.411433 containerd[1680]: 2026-01-20 06:56:20.369 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.131/26] IPv6=[] ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" HandleID="k8s-pod-network.442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Workload="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" Jan 20 06:56:20.412690 containerd[1680]: 2026-01-20 06:56:20.385 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3f79bfe2-fd9a-4aff-ac23-745eeb4426b7", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"goldmane-666569f655-8j6t9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali54bebfffc62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:20.412690 containerd[1680]: 2026-01-20 06:56:20.386 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.131/32] ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" Jan 20 06:56:20.412690 containerd[1680]: 2026-01-20 06:56:20.386 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54bebfffc62 ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" Jan 20 06:56:20.412690 containerd[1680]: 2026-01-20 06:56:20.391 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" Jan 20 06:56:20.412690 containerd[1680]: 2026-01-20 06:56:20.392 [INFO][4579] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3f79bfe2-fd9a-4aff-ac23-745eeb4426b7", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319", Pod:"goldmane-666569f655-8j6t9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali54bebfffc62", MAC:"22:49:6e:4d:fa:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:20.412690 containerd[1680]: 2026-01-20 06:56:20.404 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" Namespace="calico-system" Pod="goldmane-666569f655-8j6t9" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-goldmane--666569f655--8j6t9-eth0" Jan 20 06:56:20.454573 containerd[1680]: time="2026-01-20T06:56:20.454521479Z" level=info msg="connecting to shim 442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319" address="unix:///run/containerd/s/dfdcb9577090ee5390c6637544643c05d16372c2c07b7645d1fb2ad88801ed69" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:20.479000 audit[4653]: NETFILTER_CFG table=filter:123 family=2 entries=54 op=nft_register_chain pid=4653 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:20.479000 audit[4653]: SYSCALL arch=c000003e syscall=46 success=yes exit=29220 a0=3 a1=7ffeee7c1700 a2=0 a3=7ffeee7c16ec items=0 ppid=4267 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.479000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:20.494117 systemd[1]: Started cri-containerd-442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319.scope - libcontainer container 442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319. 
Jan 20 06:56:20.505000 audit[4696]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:20.505000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffc93c30f0 a2=0 a3=7fffc93c30dc items=0 ppid=3052 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:20.507000 audit: BPF prog-id=221 op=LOAD Jan 20 06:56:20.507000 audit: BPF prog-id=222 op=LOAD Jan 20 06:56:20.507000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4662 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326433313034396637656130653034366166663331613637616137 Jan 20 06:56:20.507000 audit: BPF prog-id=222 op=UNLOAD Jan 20 06:56:20.507000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4662 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.507000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326433313034396637656130653034366166663331613637616137 Jan 20 06:56:20.507000 audit: BPF prog-id=223 op=LOAD Jan 20 06:56:20.507000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4662 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326433313034396637656130653034366166663331613637616137 Jan 20 06:56:20.507000 audit: BPF prog-id=224 op=LOAD Jan 20 06:56:20.507000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4662 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326433313034396637656130653034366166663331613637616137 Jan 20 06:56:20.507000 audit: BPF prog-id=224 op=UNLOAD Jan 20 06:56:20.507000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4662 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:56:20.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326433313034396637656130653034366166663331613637616137 Jan 20 06:56:20.508000 audit: BPF prog-id=223 op=UNLOAD Jan 20 06:56:20.508000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4662 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326433313034396637656130653034366166663331613637616137 Jan 20 06:56:20.508000 audit: BPF prog-id=225 op=LOAD Jan 20 06:56:20.508000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4662 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326433313034396637656130653034366166663331613637616137 Jan 20 06:56:20.511000 audit[4696]: NETFILTER_CFG table=nat:125 family=2 entries=14 op=nft_register_rule pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:20.511000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffc93c30f0 a2=0 a3=0 items=0 ppid=3052 pid=4696 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:20.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:20.546605 containerd[1680]: time="2026-01-20T06:56:20.546568889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8j6t9,Uid:3f79bfe2-fd9a-4aff-ac23-745eeb4426b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"442d31049f7ea0e046aff31a67aa7f6768dad3358899ea23cf07fecf7ade5319\"" Jan 20 06:56:20.651500 containerd[1680]: time="2026-01-20T06:56:20.651331743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:20.653695 containerd[1680]: time="2026-01-20T06:56:20.653632147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:56:20.653695 containerd[1680]: time="2026-01-20T06:56:20.653682276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:20.653950 kubelet[2906]: E0120 06:56:20.653908 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:56:20.654350 kubelet[2906]: E0120 06:56:20.653956 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:56:20.654689 containerd[1680]: time="2026-01-20T06:56:20.654654732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:56:20.660548 kubelet[2906]: E0120 06:56:20.660463 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:037c0c1e8cbe4b15a6588620b1762857,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:20.880096 systemd-networkd[1581]: cali50caeba528b: Gained IPv6LL Jan 20 06:56:21.003082 containerd[1680]: time="2026-01-20T06:56:21.003038772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:21.004750 containerd[1680]: time="2026-01-20T06:56:21.004713527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:56:21.004817 containerd[1680]: time="2026-01-20T06:56:21.004803605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:21.005014 kubelet[2906]: E0120 06:56:21.004986 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:56:21.005060 kubelet[2906]: E0120 06:56:21.005027 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:56:21.005277 kubelet[2906]: E0120 06:56:21.005239 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khrnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:21.005640 containerd[1680]: time="2026-01-20T06:56:21.005617582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:56:21.007048 kubelet[2906]: E0120 06:56:21.007012 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:56:21.210774 containerd[1680]: time="2026-01-20T06:56:21.210738646Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-j8w7k,Uid:506bd27e-3197-4d34-a858-e04017d318df,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:21.321082 systemd-networkd[1581]: cali970820a5a4d: Link UP Jan 20 06:56:21.322182 systemd-networkd[1581]: cali970820a5a4d: Gained carrier Jan 20 06:56:21.333026 kubelet[2906]: I0120 06:56:21.332968 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dkxm9" podStartSLOduration=55.332945784 podStartE2EDuration="55.332945784s" podCreationTimestamp="2026-01-20 06:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:56:20.430372972 +0000 UTC m=+59.342010749" watchObservedRunningTime="2026-01-20 06:56:21.332945784 +0000 UTC m=+60.244583564" Jan 20 06:56:21.333894 containerd[1680]: time="2026-01-20T06:56:21.333821685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:21.335918 containerd[1680]: time="2026-01-20T06:56:21.335880705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:56:21.336181 containerd[1680]: time="2026-01-20T06:56:21.335925321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:21.336270 kubelet[2906]: E0120 06:56:21.336240 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:56:21.336325 kubelet[2906]: E0120 06:56:21.336278 2906 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:56:21.337846 kubelet[2906]: E0120 06:56:21.337014 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,Localho
stProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.252 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0 csi-node-driver- calico-system 506bd27e-3197-4d34-a858-e04017d318df 689 0 2026-01-20 06:55:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4585-0-0-n-f719bce5cf csi-node-driver-j8w7k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali970820a5a4d [] [] }} ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.253 [INFO][4706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 
06:56:21.283 [INFO][4717] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" HandleID="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Workload="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.283 [INFO][4717] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" HandleID="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Workload="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"csi-node-driver-j8w7k", "timestamp":"2026-01-20 06:56:21.283475871 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.283 [INFO][4717] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.283 [INFO][4717] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.283 [INFO][4717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.290 [INFO][4717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.296 [INFO][4717] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.299 [INFO][4717] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.301 [INFO][4717] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.304 [INFO][4717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.304 [INFO][4717] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.305 [INFO][4717] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342 Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.310 [INFO][4717] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.316 [INFO][4717] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.93.132/26] block=192.168.93.128/26 handle="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.316 [INFO][4717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.132/26] handle="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.316 [INFO][4717] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:56:21.337989 containerd[1680]: 2026-01-20 06:56:21.316 [INFO][4717] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.132/26] IPv6=[] ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" HandleID="k8s-pod-network.6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Workload="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" Jan 20 06:56:21.338449 containerd[1680]: 2026-01-20 06:56:21.318 [INFO][4706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"506bd27e-3197-4d34-a858-e04017d318df", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"csi-node-driver-j8w7k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali970820a5a4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:21.338449 containerd[1680]: 2026-01-20 06:56:21.318 [INFO][4706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.132/32] ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" Jan 20 06:56:21.338449 containerd[1680]: 2026-01-20 06:56:21.318 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali970820a5a4d ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" Jan 20 06:56:21.338449 containerd[1680]: 2026-01-20 06:56:21.322 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" Jan 20 06:56:21.338449 
containerd[1680]: 2026-01-20 06:56:21.322 [INFO][4706] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"506bd27e-3197-4d34-a858-e04017d318df", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342", Pod:"csi-node-driver-j8w7k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali970820a5a4d", MAC:"0a:aa:29:5b:8c:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:21.338449 containerd[1680]: 
2026-01-20 06:56:21.334 [INFO][4706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" Namespace="calico-system" Pod="csi-node-driver-j8w7k" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-csi--node--driver--j8w7k-eth0" Jan 20 06:56:21.339383 kubelet[2906]: E0120 06:56:21.338255 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:56:21.353000 audit[4731]: NETFILTER_CFG table=filter:126 family=2 entries=40 op=nft_register_chain pid=4731 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:21.353000 audit[4731]: SYSCALL arch=c000003e syscall=46 success=yes exit=20748 a0=3 a1=7ffe9a4eddd0 a2=0 a3=7ffe9a4eddbc items=0 ppid=4267 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.353000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:21.371506 containerd[1680]: time="2026-01-20T06:56:21.371466230Z" level=info msg="connecting to shim 6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342" 
address="unix:///run/containerd/s/13dd6b4ddf407966d45f4cee7f631a7a1910ba647ebc2f99d388d187c19ad3a5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:21.404094 systemd[1]: Started cri-containerd-6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342.scope - libcontainer container 6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342. Jan 20 06:56:21.415000 audit: BPF prog-id=226 op=LOAD Jan 20 06:56:21.416000 audit: BPF prog-id=227 op=LOAD Jan 20 06:56:21.416000 audit[4751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4740 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631363665303537316664646539653234663234383039356265386630 Jan 20 06:56:21.416000 audit: BPF prog-id=227 op=UNLOAD Jan 20 06:56:21.416000 audit[4751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631363665303537316664646539653234663234383039356265386630 Jan 20 06:56:21.416000 audit: BPF prog-id=228 op=LOAD Jan 20 06:56:21.416000 audit[4751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4740 pid=4751 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631363665303537316664646539653234663234383039356265386630 Jan 20 06:56:21.416000 audit: BPF prog-id=229 op=LOAD Jan 20 06:56:21.416000 audit[4751]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4740 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631363665303537316664646539653234663234383039356265386630 Jan 20 06:56:21.416000 audit: BPF prog-id=229 op=UNLOAD Jan 20 06:56:21.416000 audit[4751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631363665303537316664646539653234663234383039356265386630 Jan 20 06:56:21.416000 audit: BPF prog-id=228 op=UNLOAD Jan 20 06:56:21.416000 audit[4751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=4740 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631363665303537316664646539653234663234383039356265386630 Jan 20 06:56:21.416000 audit: BPF prog-id=230 op=LOAD Jan 20 06:56:21.416000 audit[4751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4740 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631363665303537316664646539653234663234383039356265386630 Jan 20 06:56:21.420893 kubelet[2906]: E0120 06:56:21.420868 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:56:21.421933 kubelet[2906]: E0120 06:56:21.421901 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:56:21.466204 containerd[1680]: time="2026-01-20T06:56:21.466163983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8w7k,Uid:506bd27e-3197-4d34-a858-e04017d318df,Namespace:calico-system,Attempt:0,} returns sandbox id \"6166e0571fdde9e24f248095be8f02ce430e5554b961ad836104aa77cd357342\"" Jan 20 06:56:21.470844 containerd[1680]: time="2026-01-20T06:56:21.470587505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:56:21.519948 systemd-networkd[1581]: vxlan.calico: Gained IPv6LL Jan 20 06:56:21.562000 audit[4777]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:21.562000 audit[4777]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff902fb260 a2=0 a3=7fff902fb24c items=0 ppid=3052 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:21.572000 audit[4777]: 
NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:21.572000 audit[4777]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff902fb260 a2=0 a3=0 items=0 ppid=3052 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:21.584000 audit[4779]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=4779 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:21.584000 audit[4779]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7106de10 a2=0 a3=7ffc7106ddfc items=0 ppid=3052 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:21.593000 audit[4779]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=4779 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:21.593000 audit[4779]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc7106de10 a2=0 a3=7ffc7106ddfc items=0 ppid=3052 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:21.593000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:21.816345 containerd[1680]: time="2026-01-20T06:56:21.812250129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:21.816345 containerd[1680]: time="2026-01-20T06:56:21.813919155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:56:21.816345 containerd[1680]: time="2026-01-20T06:56:21.813998601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:21.816345 containerd[1680]: time="2026-01-20T06:56:21.816245633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:56:21.816535 kubelet[2906]: E0120 06:56:21.814120 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:56:21.816535 kubelet[2906]: E0120 06:56:21.814157 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:56:21.816535 kubelet[2906]: E0120 06:56:21.814259 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 06:56:22.159183 containerd[1680]: time="2026-01-20T06:56:22.159117059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:22.160869 containerd[1680]: time="2026-01-20T06:56:22.160833399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:56:22.160963 containerd[1680]: time="2026-01-20T06:56:22.160912075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:22.161108 kubelet[2906]: E0120 06:56:22.161058 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:56:22.161108 kubelet[2906]: E0120 06:56:22.161100 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:56:22.161268 kubelet[2906]: E0120 06:56:22.161197 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:22.162570 kubelet[2906]: E0120 06:56:22.162542 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:56:22.209943 containerd[1680]: time="2026-01-20T06:56:22.209909330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-jx5cs,Uid:de3eadd9-d35e-43b1-acf5-88fe04381bf9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:56:22.210200 containerd[1680]: time="2026-01-20T06:56:22.209916042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5pgm,Uid:62cdcd04-03ba-4af6-a274-3cc1be14a458,Namespace:kube-system,Attempt:0,}" Jan 20 06:56:22.334806 systemd-networkd[1581]: cali14ded684e99: Link UP Jan 20 06:56:22.335433 systemd-networkd[1581]: cali14ded684e99: Gained carrier Jan 20 06:56:22.352403 systemd-networkd[1581]: cali54bebfffc62: Gained IPv6LL Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.263 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0 coredns-668d6bf9bc- kube-system 62cdcd04-03ba-4af6-a274-3cc1be14a458 801 0 2026-01-20 06:55:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4585-0-0-n-f719bce5cf coredns-668d6bf9bc-g5pgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali14ded684e99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.263 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.292 [INFO][4810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" HandleID="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Workload="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.292 [INFO][4810] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" HandleID="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Workload="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"coredns-668d6bf9bc-g5pgm", "timestamp":"2026-01-20 06:56:22.292704605 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.292 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.293 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.293 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.302 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.306 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.309 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.310 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.312 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.312 [INFO][4810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.313 [INFO][4810] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.318 [INFO][4810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.328 [INFO][4810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.133/26] block=192.168.93.128/26 handle="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.328 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.133/26] handle="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.328 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 06:56:22.354484 containerd[1680]: 2026-01-20 06:56:22.328 [INFO][4810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.133/26] IPv6=[] ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" HandleID="k8s-pod-network.79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Workload="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" Jan 20 06:56:22.358155 containerd[1680]: 2026-01-20 06:56:22.330 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"62cdcd04-03ba-4af6-a274-3cc1be14a458", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"coredns-668d6bf9bc-g5pgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali14ded684e99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:22.358155 containerd[1680]: 2026-01-20 06:56:22.331 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.133/32] ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" Jan 20 06:56:22.358155 containerd[1680]: 2026-01-20 06:56:22.331 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14ded684e99 ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" Jan 20 06:56:22.358155 containerd[1680]: 2026-01-20 06:56:22.335 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" Jan 20 06:56:22.358155 containerd[1680]: 2026-01-20 06:56:22.336 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" 
WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"62cdcd04-03ba-4af6-a274-3cc1be14a458", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c", Pod:"coredns-668d6bf9bc-g5pgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14ded684e99", MAC:"82:ac:f7:14:cc:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:22.358155 
containerd[1680]: 2026-01-20 06:56:22.348 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5pgm" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-coredns--668d6bf9bc--g5pgm-eth0" Jan 20 06:56:22.368000 audit[4829]: NETFILTER_CFG table=filter:131 family=2 entries=40 op=nft_register_chain pid=4829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:22.368000 audit[4829]: SYSCALL arch=c000003e syscall=46 success=yes exit=20328 a0=3 a1=7ffc89fe18c0 a2=0 a3=7ffc89fe18ac items=0 ppid=4267 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.368000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:22.392804 containerd[1680]: time="2026-01-20T06:56:22.392696946Z" level=info msg="connecting to shim 79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c" address="unix:///run/containerd/s/6881709f28f6a3a8ec7f3cfa6e73b923812788e5b2afab3e0a55fbd440a58394" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:22.426368 kubelet[2906]: E0120 06:56:22.425499 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:56:22.433375 kubelet[2906]: E0120 
06:56:22.432629 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:56:22.435313 systemd[1]: Started cri-containerd-79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c.scope - libcontainer container 79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c. 
Jan 20 06:56:22.459480 systemd-networkd[1581]: cali63e868f6f66: Link UP Jan 20 06:56:22.461208 systemd-networkd[1581]: cali63e868f6f66: Gained carrier Jan 20 06:56:22.477000 audit: BPF prog-id=231 op=LOAD Jan 20 06:56:22.478000 audit: BPF prog-id=232 op=LOAD Jan 20 06:56:22.478000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4839 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653662313362663637383033363766303761656537326565653139 Jan 20 06:56:22.478000 audit: BPF prog-id=232 op=UNLOAD Jan 20 06:56:22.478000 audit[4850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653662313362663637383033363766303761656537326565653139 Jan 20 06:56:22.478000 audit: BPF prog-id=233 op=LOAD Jan 20 06:56:22.478000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4839 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.478000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653662313362663637383033363766303761656537326565653139 Jan 20 06:56:22.478000 audit: BPF prog-id=234 op=LOAD Jan 20 06:56:22.478000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4839 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653662313362663637383033363766303761656537326565653139 Jan 20 06:56:22.479000 audit: BPF prog-id=234 op=UNLOAD Jan 20 06:56:22.479000 audit[4850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653662313362663637383033363766303761656537326565653139 Jan 20 06:56:22.479000 audit: BPF prog-id=233 op=UNLOAD Jan 20 06:56:22.479000 audit[4850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:56:22.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653662313362663637383033363766303761656537326565653139 Jan 20 06:56:22.479000 audit: BPF prog-id=235 op=LOAD Jan 20 06:56:22.479000 audit[4850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4839 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739653662313362663637383033363766303761656537326565653139 Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.260 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0 calico-apiserver-597cf6c9f4- calico-apiserver de3eadd9-d35e-43b1-acf5-88fe04381bf9 804 0 2026-01-20 06:55:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:597cf6c9f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4585-0-0-n-f719bce5cf calico-apiserver-597cf6c9f4-jx5cs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63e868f6f66 [] [] }} ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" 
WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.261 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.302 [INFO][4805] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" HandleID="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.302 [INFO][4805] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" HandleID="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"calico-apiserver-597cf6c9f4-jx5cs", "timestamp":"2026-01-20 06:56:22.302418643 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.302 [INFO][4805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.328 [INFO][4805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.328 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.404 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.411 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.416 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.418 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.422 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.423 [INFO][4805] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.427 [INFO][4805] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4 Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.438 [INFO][4805] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" 
host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.450 [INFO][4805] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.134/26] block=192.168.93.128/26 handle="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.450 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.134/26] handle="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.450 [INFO][4805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:56:22.483331 containerd[1680]: 2026-01-20 06:56:22.450 [INFO][4805] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.134/26] IPv6=[] ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" HandleID="k8s-pod-network.22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" Jan 20 06:56:22.484748 containerd[1680]: 2026-01-20 06:56:22.454 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0", GenerateName:"calico-apiserver-597cf6c9f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"de3eadd9-d35e-43b1-acf5-88fe04381bf9", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 36, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597cf6c9f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"calico-apiserver-597cf6c9f4-jx5cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63e868f6f66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:22.484748 containerd[1680]: 2026-01-20 06:56:22.454 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.134/32] ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" Jan 20 06:56:22.484748 containerd[1680]: 2026-01-20 06:56:22.454 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63e868f6f66 ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" Jan 20 06:56:22.484748 containerd[1680]: 2026-01-20 06:56:22.461 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" Jan 20 06:56:22.484748 containerd[1680]: 2026-01-20 06:56:22.463 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0", GenerateName:"calico-apiserver-597cf6c9f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"de3eadd9-d35e-43b1-acf5-88fe04381bf9", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597cf6c9f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4", Pod:"calico-apiserver-597cf6c9f4-jx5cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63e868f6f66", MAC:"a2:f7:5b:92:fc:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:22.484748 containerd[1680]: 2026-01-20 06:56:22.478 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-jx5cs" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--jx5cs-eth0" Jan 20 06:56:22.500000 audit[4877]: NETFILTER_CFG table=filter:132 family=2 entries=62 op=nft_register_chain pid=4877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:22.500000 audit[4877]: SYSCALL arch=c000003e syscall=46 success=yes exit=31756 a0=3 a1=7ffdd6792190 a2=0 a3=7ffdd679217c items=0 ppid=4267 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.500000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:22.519933 containerd[1680]: time="2026-01-20T06:56:22.519814573Z" level=info msg="connecting to shim 22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4" address="unix:///run/containerd/s/aca5dc1ca644ba5ccc33a0c0c02cf4254fd682038be91a71c8a8f850f795a609" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:22.530961 containerd[1680]: time="2026-01-20T06:56:22.530902384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5pgm,Uid:62cdcd04-03ba-4af6-a274-3cc1be14a458,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c\"" Jan 20 06:56:22.536556 containerd[1680]: time="2026-01-20T06:56:22.536525123Z" level=info msg="CreateContainer within sandbox \"79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:56:22.551146 systemd[1]: Started cri-containerd-22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4.scope - libcontainer container 22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4. Jan 20 06:56:22.557227 containerd[1680]: time="2026-01-20T06:56:22.557194253Z" level=info msg="Container 02fc9f914cf47f4082ed5fd52f205f2543a5675a6e5586cb2c7ee420b0b4ec7c: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:56:22.567403 containerd[1680]: time="2026-01-20T06:56:22.567347312Z" level=info msg="CreateContainer within sandbox \"79e6b13bf6780367f07aee72eee19319279f931005cbb4a0cc0065ecf424254c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02fc9f914cf47f4082ed5fd52f205f2543a5675a6e5586cb2c7ee420b0b4ec7c\"" Jan 20 06:56:22.567000 audit: BPF prog-id=236 op=LOAD Jan 20 06:56:22.570096 kernel: kauditd_printk_skb: 365 callbacks suppressed Jan 20 06:56:22.570161 kernel: audit: type=1334 audit(1768892182.567:704): prog-id=236 op=LOAD Jan 20 06:56:22.571166 containerd[1680]: time="2026-01-20T06:56:22.571138891Z" level=info msg="StartContainer for \"02fc9f914cf47f4082ed5fd52f205f2543a5675a6e5586cb2c7ee420b0b4ec7c\"" Jan 20 06:56:22.570000 audit: BPF prog-id=237 op=LOAD Jan 20 06:56:22.573564 containerd[1680]: time="2026-01-20T06:56:22.573131041Z" level=info msg="connecting to shim 02fc9f914cf47f4082ed5fd52f205f2543a5675a6e5586cb2c7ee420b0b4ec7c" address="unix:///run/containerd/s/6881709f28f6a3a8ec7f3cfa6e73b923812788e5b2afab3e0a55fbd440a58394" protocol=ttrpc version=3 Jan 20 06:56:22.570000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4888 
pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.574556 kernel: audit: type=1334 audit(1768892182.570:705): prog-id=237 op=LOAD Jan 20 06:56:22.574613 kernel: audit: type=1300 audit(1768892182.570:705): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.578849 kernel: audit: type=1327 audit(1768892182.570:705): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.571000 audit: BPF prog-id=237 op=UNLOAD Jan 20 06:56:22.582044 kernel: audit: type=1334 audit(1768892182.571:706): prog-id=237 op=UNLOAD Jan 20 06:56:22.571000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.583123 kernel: audit: type=1300 audit(1768892182.571:706): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.586613 kernel: audit: type=1327 audit(1768892182.571:706): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.571000 audit: BPF prog-id=238 op=LOAD Jan 20 06:56:22.589949 kernel: audit: type=1334 audit(1768892182.571:707): prog-id=238 op=LOAD Jan 20 06:56:22.590027 kernel: audit: type=1300 audit(1768892182.571:707): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.571000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.595523 kernel: audit: type=1327 audit(1768892182.571:707): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.571000 audit: BPF prog-id=239 op=LOAD Jan 20 06:56:22.571000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.571000 audit: BPF prog-id=239 op=UNLOAD Jan 20 06:56:22.571000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.571000 audit: BPF prog-id=238 op=UNLOAD Jan 20 06:56:22.571000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:56:22.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.571000 audit: BPF prog-id=240 op=LOAD Jan 20 06:56:22.571000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4888 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430 Jan 20 06:56:22.609153 systemd[1]: Started cri-containerd-02fc9f914cf47f4082ed5fd52f205f2543a5675a6e5586cb2c7ee420b0b4ec7c.scope - libcontainer container 02fc9f914cf47f4082ed5fd52f205f2543a5675a6e5586cb2c7ee420b0b4ec7c. 
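The audit PROCTITLE fields in the records above are the process command line, hex-encoded with NUL bytes separating argv entries. As a reading aid (not part of the log), a minimal Python sketch decodes one such payload; the hex string is copied verbatim from the log, where the trailing container task ID is truncated:

```python
# Decode an audit PROCTITLE payload: hex-encoded argv joined by NUL bytes.
# The sample is copied from the audit records above; the log truncates it,
# so the final path component (the container task ID) is cut short.
proctitle_hex = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"
    "2D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74"
    "696D652E76322E7461736B2F6B38732E696F2F3232666234393335643034373434646365636263393031623238613430"
)

# Split on NUL separators to recover the original argv.
argv = [part.decode() for part in bytes.fromhex(proctitle_hex).split(b"\x00")]
print(" ".join(argv))
# → runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/22fb4935d04744dcecbc901b28a40
```

The decoded command confirms these BPF LOAD/UNLOAD audit events come from `runc` setting up the container for the sandbox ID that the containerd `RunPodSandbox` line below reports.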
Jan 20 06:56:22.626668 containerd[1680]: time="2026-01-20T06:56:22.626614535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-jx5cs,Uid:de3eadd9-d35e-43b1-acf5-88fe04381bf9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"22fb4935d04744dcecbc901b28a400fb980e499baa911e6a783445f3e5e7e3b4\"" Jan 20 06:56:22.626000 audit: BPF prog-id=241 op=LOAD Jan 20 06:56:22.626000 audit: BPF prog-id=242 op=LOAD Jan 20 06:56:22.626000 audit[4928]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4839 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666339663931346366343766343038326564356664353266323035 Jan 20 06:56:22.627000 audit: BPF prog-id=242 op=UNLOAD Jan 20 06:56:22.627000 audit[4928]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666339663931346366343766343038326564356664353266323035 Jan 20 06:56:22.627000 audit: BPF prog-id=243 op=LOAD Jan 20 06:56:22.627000 audit[4928]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4839 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666339663931346366343766343038326564356664353266323035 Jan 20 06:56:22.627000 audit: BPF prog-id=244 op=LOAD Jan 20 06:56:22.627000 audit[4928]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4839 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666339663931346366343766343038326564356664353266323035 Jan 20 06:56:22.628000 audit: BPF prog-id=244 op=UNLOAD Jan 20 06:56:22.628000 audit[4928]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666339663931346366343766343038326564356664353266323035 Jan 20 06:56:22.628000 audit: BPF prog-id=243 op=UNLOAD Jan 20 06:56:22.628000 audit[4928]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4928 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666339663931346366343766343038326564356664353266323035 Jan 20 06:56:22.628000 audit: BPF prog-id=245 op=LOAD Jan 20 06:56:22.628000 audit[4928]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4839 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:22.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032666339663931346366343766343038326564356664353266323035 Jan 20 06:56:22.630085 containerd[1680]: time="2026-01-20T06:56:22.628522159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:56:22.648662 containerd[1680]: time="2026-01-20T06:56:22.648587379Z" level=info msg="StartContainer for \"02fc9f914cf47f4082ed5fd52f205f2543a5675a6e5586cb2c7ee420b0b4ec7c\" returns successfully" Jan 20 06:56:22.672078 systemd-networkd[1581]: cali970820a5a4d: Gained IPv6LL Jan 20 06:56:22.967607 containerd[1680]: time="2026-01-20T06:56:22.967364235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:22.969036 containerd[1680]: time="2026-01-20T06:56:22.968989309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:56:22.969198 containerd[1680]: time="2026-01-20T06:56:22.969073499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:22.969393 kubelet[2906]: E0120 06:56:22.969340 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:22.969813 kubelet[2906]: E0120 06:56:22.969400 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:22.969813 kubelet[2906]: E0120 06:56:22.969638 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkgj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:22.971214 kubelet[2906]: E0120 06:56:22.971155 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:56:23.210092 containerd[1680]: time="2026-01-20T06:56:23.209976820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-c4gmt,Uid:52f9dd62-5a28-4d34-8a7e-35c040c0ecfe,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:56:23.210384 containerd[1680]: time="2026-01-20T06:56:23.210349597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-676fb446dd-r6rmf,Uid:e2f4d2c7-d335-42bb-9262-2a522436304e,Namespace:calico-system,Attempt:0,}" Jan 20 06:56:23.324069 systemd-networkd[1581]: calicc4591fae6e: Link UP Jan 20 06:56:23.324893 systemd-networkd[1581]: calicc4591fae6e: Gained carrier Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.253 [INFO][4978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0 calico-kube-controllers-676fb446dd- calico-system e2f4d2c7-d335-42bb-9262-2a522436304e 808 0 2026-01-20 06:55:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:676fb446dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 
ci-4585-0-0-n-f719bce5cf calico-kube-controllers-676fb446dd-r6rmf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc4591fae6e [] [] }} ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" Pod="calico-kube-controllers-676fb446dd-r6rmf" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.254 [INFO][4978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" Pod="calico-kube-controllers-676fb446dd-r6rmf" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.287 [INFO][4994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" HandleID="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.287 [INFO][4994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" HandleID="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"calico-kube-controllers-676fb446dd-r6rmf", "timestamp":"2026-01-20 06:56:23.287409955 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.287 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.287 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.287 [INFO][4994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.293 [INFO][4994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.297 [INFO][4994] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.300 [INFO][4994] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.302 [INFO][4994] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.303 [INFO][4994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.304 [INFO][4994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.305 [INFO][4994] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4 Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.311 [INFO][4994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.318 [INFO][4994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.135/26] block=192.168.93.128/26 handle="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.318 [INFO][4994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.135/26] handle="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.318 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 06:56:23.341780 containerd[1680]: 2026-01-20 06:56:23.318 [INFO][4994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.135/26] IPv6=[] ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" HandleID="k8s-pod-network.39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" Jan 20 06:56:23.342358 containerd[1680]: 2026-01-20 06:56:23.321 [INFO][4978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" Pod="calico-kube-controllers-676fb446dd-r6rmf" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0", GenerateName:"calico-kube-controllers-676fb446dd-", Namespace:"calico-system", SelfLink:"", UID:"e2f4d2c7-d335-42bb-9262-2a522436304e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"676fb446dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"calico-kube-controllers-676fb446dd-r6rmf", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc4591fae6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:23.342358 containerd[1680]: 2026-01-20 06:56:23.321 [INFO][4978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.135/32] ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" Pod="calico-kube-controllers-676fb446dd-r6rmf" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" Jan 20 06:56:23.342358 containerd[1680]: 2026-01-20 06:56:23.321 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc4591fae6e ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" Pod="calico-kube-controllers-676fb446dd-r6rmf" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" Jan 20 06:56:23.342358 containerd[1680]: 2026-01-20 06:56:23.326 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" Pod="calico-kube-controllers-676fb446dd-r6rmf" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" Jan 20 06:56:23.342358 containerd[1680]: 2026-01-20 06:56:23.327 [INFO][4978] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" Pod="calico-kube-controllers-676fb446dd-r6rmf" 
WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0", GenerateName:"calico-kube-controllers-676fb446dd-", Namespace:"calico-system", SelfLink:"", UID:"e2f4d2c7-d335-42bb-9262-2a522436304e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"676fb446dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4", Pod:"calico-kube-controllers-676fb446dd-r6rmf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc4591fae6e", MAC:"52:a4:4d:3a:95:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:23.342358 containerd[1680]: 2026-01-20 06:56:23.337 [INFO][4978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" Namespace="calico-system" 
Pod="calico-kube-controllers-676fb446dd-r6rmf" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--kube--controllers--676fb446dd--r6rmf-eth0" Jan 20 06:56:23.355000 audit[5016]: NETFILTER_CFG table=filter:133 family=2 entries=58 op=nft_register_chain pid=5016 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:23.355000 audit[5016]: SYSCALL arch=c000003e syscall=46 success=yes exit=27164 a0=3 a1=7ffcf23818e0 a2=0 a3=7ffcf23818cc items=0 ppid=4267 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.355000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:23.368683 containerd[1680]: time="2026-01-20T06:56:23.368614732Z" level=info msg="connecting to shim 39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4" address="unix:///run/containerd/s/da0981201f7435dafbcb37664a13954919c8b45609363eb6a1d62ed54dd53517" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:23.387910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814951431.mount: Deactivated successfully. Jan 20 06:56:23.394257 systemd[1]: Started cri-containerd-39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4.scope - libcontainer container 39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4. 
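The IPAM exchange above claims 192.168.93.135 from the block 192.168.93.128/26 that is affine to host ci-4585-0-0-n-f719bce5cf. A minimal sketch using Python's stdlib `ipaddress` module checks that arithmetic with the values copied from the log (this is only an illustration of the addressing math, not Calico's actual IPAM code):

```python
import ipaddress

# Values copied from the ipam/ipam.go log lines above.
block = ipaddress.ip_network("192.168.93.128/26")       # host-affine block
assigned = ipaddress.ip_interface("192.168.93.135/26")  # address claimed for the pod

# The claimed address must fall inside the block affine to this host.
assert assigned.ip in block

# A /26 block spans 64 addresses; once they are exhausted, Calico IPAM
# would have to claim an additional block for this host.
print(block.num_addresses)  # → 64
```

This matches the endpoint record that follows, where the pod is populated with `IPNetworks:[]string{"192.168.93.135/32"}` on interface calicc4591fae6e.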
Jan 20 06:56:23.427888 kubelet[2906]: E0120 06:56:23.427489 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:56:23.433598 kubelet[2906]: E0120 06:56:23.433559 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:56:23.451376 systemd-networkd[1581]: calie4f51c25078: Link UP Jan 20 06:56:23.453118 systemd-networkd[1581]: calie4f51c25078: Gained carrier Jan 20 06:56:23.469000 audit: BPF prog-id=246 op=LOAD Jan 20 06:56:23.470000 audit: BPF prog-id=247 op=LOAD Jan 20 06:56:23.470000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5024 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663433643335303762643936643964316233326165363365353038 Jan 20 06:56:23.470000 audit: BPF prog-id=247 op=UNLOAD Jan 20 06:56:23.470000 audit[5037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663433643335303762643936643964316233326165363365353038 Jan 20 06:56:23.470000 audit: BPF prog-id=248 op=LOAD Jan 20 06:56:23.470000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5024 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663433643335303762643936643964316233326165363365353038 Jan 20 06:56:23.470000 audit: BPF prog-id=249 op=LOAD Jan 20 06:56:23.470000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5024 pid=5037 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663433643335303762643936643964316233326165363365353038 Jan 20 06:56:23.470000 audit: BPF prog-id=249 op=UNLOAD Jan 20 06:56:23.470000 audit[5037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663433643335303762643936643964316233326165363365353038 Jan 20 06:56:23.470000 audit: BPF prog-id=248 op=UNLOAD Jan 20 06:56:23.470000 audit[5037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663433643335303762643936643964316233326165363365353038 Jan 20 06:56:23.470000 audit: BPF prog-id=250 op=LOAD Jan 20 06:56:23.470000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 
a3=0 items=0 ppid=5024 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339663433643335303762643936643964316233326165363365353038 Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.256 [INFO][4968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0 calico-apiserver-597cf6c9f4- calico-apiserver 52f9dd62-5a28-4d34-8a7e-35c040c0ecfe 805 0 2026-01-20 06:55:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:597cf6c9f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4585-0-0-n-f719bce5cf calico-apiserver-597cf6c9f4-c4gmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie4f51c25078 [] [] }} ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.256 [INFO][4968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" Jan 20 06:56:23.478055 containerd[1680]: 
2026-01-20 06:56:23.288 [INFO][4999] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" HandleID="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.288 [INFO][4999] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" HandleID="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4585-0-0-n-f719bce5cf", "pod":"calico-apiserver-597cf6c9f4-c4gmt", "timestamp":"2026-01-20 06:56:23.288516401 +0000 UTC"}, Hostname:"ci-4585-0-0-n-f719bce5cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.288 [INFO][4999] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.318 [INFO][4999] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.318 [INFO][4999] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585-0-0-n-f719bce5cf' Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.396 [INFO][4999] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.401 [INFO][4999] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.408 [INFO][4999] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.413 [INFO][4999] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.419 [INFO][4999] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.419 [INFO][4999] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.422 [INFO][4999] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3 Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.427 [INFO][4999] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.438 [INFO][4999] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.93.136/26] block=192.168.93.128/26 handle="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.438 [INFO][4999] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.136/26] handle="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" host="ci-4585-0-0-n-f719bce5cf" Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.438 [INFO][4999] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:56:23.478055 containerd[1680]: 2026-01-20 06:56:23.438 [INFO][4999] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.136/26] IPv6=[] ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" HandleID="k8s-pod-network.a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Workload="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" Jan 20 06:56:23.478532 containerd[1680]: 2026-01-20 06:56:23.441 [INFO][4968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0", GenerateName:"calico-apiserver-597cf6c9f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"52f9dd62-5a28-4d34-8a7e-35c040c0ecfe", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597cf6c9f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"", Pod:"calico-apiserver-597cf6c9f4-c4gmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4f51c25078", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:23.478532 containerd[1680]: 2026-01-20 06:56:23.442 [INFO][4968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.136/32] ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" Jan 20 06:56:23.478532 containerd[1680]: 2026-01-20 06:56:23.442 [INFO][4968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4f51c25078 ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" Jan 20 06:56:23.478532 containerd[1680]: 2026-01-20 06:56:23.455 [INFO][4968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" 
Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" Jan 20 06:56:23.478532 containerd[1680]: 2026-01-20 06:56:23.456 [INFO][4968] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0", GenerateName:"calico-apiserver-597cf6c9f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"52f9dd62-5a28-4d34-8a7e-35c040c0ecfe", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"597cf6c9f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585-0-0-n-f719bce5cf", ContainerID:"a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3", Pod:"calico-apiserver-597cf6c9f4-c4gmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calie4f51c25078", MAC:"96:67:59:ab:af:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:56:23.478532 containerd[1680]: 2026-01-20 06:56:23.474 [INFO][4968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" Namespace="calico-apiserver" Pod="calico-apiserver-597cf6c9f4-c4gmt" WorkloadEndpoint="ci--4585--0--0--n--f719bce5cf-k8s-calico--apiserver--597cf6c9f4--c4gmt-eth0" Jan 20 06:56:23.482000 audit[5059]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=5059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:23.482000 audit[5059]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd05a2d5d0 a2=0 a3=7ffd05a2d5bc items=0 ppid=3052 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:23.492101 kubelet[2906]: I0120 06:56:23.491889 2906 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g5pgm" podStartSLOduration=57.491619595 podStartE2EDuration="57.491619595s" podCreationTimestamp="2026-01-20 06:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:56:23.49030904 +0000 UTC m=+62.401946824" watchObservedRunningTime="2026-01-20 06:56:23.491619595 +0000 UTC m=+62.403257376" Jan 20 06:56:23.496000 audit[5059]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 
06:56:23.496000 audit[5059]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd05a2d5d0 a2=0 a3=7ffd05a2d5bc items=0 ppid=3052 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.496000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:23.519367 containerd[1680]: time="2026-01-20T06:56:23.519260564Z" level=info msg="connecting to shim a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3" address="unix:///run/containerd/s/7e5bd8cd9998c007236b2af8ce8999c133b1dd59b0ff2d4abd9ed002c5453d4e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:56:23.541000 audit[5085]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:23.541000 audit[5085]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc00c28130 a2=0 a3=7ffc00c2811c items=0 ppid=3052 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:23.559444 systemd[1]: Started cri-containerd-a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3.scope - libcontainer container a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3. 
Jan 20 06:56:23.558000 audit[5096]: NETFILTER_CFG table=filter:137 family=2 entries=53 op=nft_register_chain pid=5096 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:56:23.558000 audit[5096]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7ffc1b4f71e0 a2=0 a3=7ffc1b4f71cc items=0 ppid=4267 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.558000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:56:23.568357 systemd-networkd[1581]: cali14ded684e99: Gained IPv6LL Jan 20 06:56:23.569000 audit[5085]: NETFILTER_CFG table=nat:138 family=2 entries=56 op=nft_register_chain pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:23.569000 audit[5085]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc00c28130 a2=0 a3=7ffc00c2811c items=0 ppid=3052 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:23.603000 audit: BPF prog-id=251 op=LOAD Jan 20 06:56:23.604000 audit: BPF prog-id=252 op=LOAD Jan 20 06:56:23.604000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5074 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.604000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131343538343834626237306566623865643665366138646364343661 Jan 20 06:56:23.605000 audit: BPF prog-id=252 op=UNLOAD Jan 20 06:56:23.605000 audit[5084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5074 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131343538343834626237306566623865643665366138646364343661 Jan 20 06:56:23.605000 audit: BPF prog-id=253 op=LOAD Jan 20 06:56:23.605000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5074 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131343538343834626237306566623865643665366138646364343661 Jan 20 06:56:23.605000 audit: BPF prog-id=254 op=LOAD Jan 20 06:56:23.605000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5074 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:56:23.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131343538343834626237306566623865643665366138646364343661 Jan 20 06:56:23.605000 audit: BPF prog-id=254 op=UNLOAD Jan 20 06:56:23.605000 audit[5084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5074 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131343538343834626237306566623865643665366138646364343661 Jan 20 06:56:23.605000 audit: BPF prog-id=253 op=UNLOAD Jan 20 06:56:23.605000 audit[5084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5074 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131343538343834626237306566623865643665366138646364343661 Jan 20 06:56:23.605000 audit: BPF prog-id=255 op=LOAD Jan 20 06:56:23.605000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5074 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:23.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131343538343834626237306566623865643665366138646364343661 Jan 20 06:56:23.627278 containerd[1680]: time="2026-01-20T06:56:23.627162081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-676fb446dd-r6rmf,Uid:e2f4d2c7-d335-42bb-9262-2a522436304e,Namespace:calico-system,Attempt:0,} returns sandbox id \"39f43d3507bd96d9d1b32ae63e5086a6273264e79eade08c05a003ea4fec96b4\"" Jan 20 06:56:23.633187 containerd[1680]: time="2026-01-20T06:56:23.633136146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:56:23.661601 containerd[1680]: time="2026-01-20T06:56:23.661557490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-597cf6c9f4-c4gmt,Uid:52f9dd62-5a28-4d34-8a7e-35c040c0ecfe,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a1458484bb70efb8ed6e6a8dcd46ac2633d79feb91e433f849942ef6091216a3\"" Jan 20 06:56:23.957688 containerd[1680]: time="2026-01-20T06:56:23.957524765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:23.959333 containerd[1680]: time="2026-01-20T06:56:23.959215978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:56:23.959333 containerd[1680]: time="2026-01-20T06:56:23.959305209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:23.959507 kubelet[2906]: E0120 06:56:23.959469 2906 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:56:23.959564 kubelet[2906]: E0120 06:56:23.959513 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:56:23.959803 kubelet[2906]: E0120 06:56:23.959770 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htbc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serv
iceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:23.959935 containerd[1680]: time="2026-01-20T06:56:23.959886664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:56:23.961786 kubelet[2906]: E0120 06:56:23.961736 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:56:24.295866 containerd[1680]: time="2026-01-20T06:56:24.295742583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:24.298257 containerd[1680]: time="2026-01-20T06:56:24.298132176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:56:24.298257 containerd[1680]: time="2026-01-20T06:56:24.298153120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:24.298862 kubelet[2906]: E0120 06:56:24.298467 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:24.298862 kubelet[2906]: E0120 06:56:24.298506 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:24.298862 kubelet[2906]: E0120 06:56:24.298630 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:24.300661 kubelet[2906]: E0120 06:56:24.300590 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:56:24.336137 systemd-networkd[1581]: cali63e868f6f66: Gained IPv6LL Jan 20 06:56:24.435007 kubelet[2906]: E0120 06:56:24.434979 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:56:24.435798 kubelet[2906]: E0120 06:56:24.435762 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:56:24.436642 kubelet[2906]: 
E0120 06:56:24.436626 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:56:24.528007 systemd-networkd[1581]: calie4f51c25078: Gained IPv6LL Jan 20 06:56:24.597000 audit[5123]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:24.597000 audit[5123]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff2d4f5ba0 a2=0 a3=7fff2d4f5b8c items=0 ppid=3052 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:24.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:24.604000 audit[5123]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:56:24.604000 audit[5123]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff2d4f5ba0 a2=0 a3=7fff2d4f5b8c items=0 ppid=3052 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:56:24.604000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:56:25.296027 systemd-networkd[1581]: calicc4591fae6e: Gained IPv6LL Jan 20 06:56:25.438307 kubelet[2906]: E0120 06:56:25.438268 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:56:25.439445 kubelet[2906]: E0120 06:56:25.439029 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:56:33.214059 containerd[1680]: time="2026-01-20T06:56:33.214011932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:56:33.566586 containerd[1680]: time="2026-01-20T06:56:33.566465561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:33.568596 containerd[1680]: time="2026-01-20T06:56:33.568565135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:56:33.568666 containerd[1680]: time="2026-01-20T06:56:33.568633995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:33.568802 kubelet[2906]: E0120 06:56:33.568757 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:56:33.569176 kubelet[2906]: E0120 06:56:33.568814 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:56:33.569176 kubelet[2906]: E0120 06:56:33.568921 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:037c0c1e8cbe4b15a6588620b1762857,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:33.570968 containerd[1680]: time="2026-01-20T06:56:33.570913815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:56:33.907483 containerd[1680]: 
time="2026-01-20T06:56:33.907442626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:33.909082 containerd[1680]: time="2026-01-20T06:56:33.909054275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:56:33.909197 containerd[1680]: time="2026-01-20T06:56:33.909118283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:33.909403 kubelet[2906]: E0120 06:56:33.909337 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:56:33.909403 kubelet[2906]: E0120 06:56:33.909379 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:56:33.909626 kubelet[2906]: E0120 06:56:33.909580 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:33.911074 kubelet[2906]: E0120 06:56:33.910983 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:56:36.210930 containerd[1680]: time="2026-01-20T06:56:36.210807097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:56:36.540229 containerd[1680]: time="2026-01-20T06:56:36.539986957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:36.542113 containerd[1680]: time="2026-01-20T06:56:36.541995750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:56:36.542113 containerd[1680]: time="2026-01-20T06:56:36.542007496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:36.542255 kubelet[2906]: E0120 06:56:36.542207 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:56:36.542255 kubelet[2906]: E0120 06:56:36.542246 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:56:36.542591 kubelet[2906]: E0120 06:56:36.542426 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:36.543774 containerd[1680]: time="2026-01-20T06:56:36.543010284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:56:36.883805 containerd[1680]: time="2026-01-20T06:56:36.883393762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:36.885238 containerd[1680]: time="2026-01-20T06:56:36.885139971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:56:36.885238 containerd[1680]: time="2026-01-20T06:56:36.885214050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:36.885369 kubelet[2906]: E0120 06:56:36.885326 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:36.885409 kubelet[2906]: E0120 06:56:36.885371 2906 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:36.885594 kubelet[2906]: E0120 06:56:36.885553 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkgj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:36.886191 containerd[1680]: time="2026-01-20T06:56:36.886147918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:56:36.887283 kubelet[2906]: E0120 06:56:36.887207 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:56:37.233712 containerd[1680]: time="2026-01-20T06:56:37.233555562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
06:56:37.236268 containerd[1680]: time="2026-01-20T06:56:37.236178516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:56:37.236374 containerd[1680]: time="2026-01-20T06:56:37.236344801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:37.236700 kubelet[2906]: E0120 06:56:37.236533 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:56:37.236700 kubelet[2906]: E0120 06:56:37.236574 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:56:37.237088 kubelet[2906]: E0120 06:56:37.236785 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khrnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:37.237730 containerd[1680]: time="2026-01-20T06:56:37.236921293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:56:37.238943 kubelet[2906]: E0120 06:56:37.238915 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:56:37.562624 containerd[1680]: time="2026-01-20T06:56:37.561992806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:37.564151 containerd[1680]: 
time="2026-01-20T06:56:37.564014213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:56:37.564151 containerd[1680]: time="2026-01-20T06:56:37.564120975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:37.565195 kubelet[2906]: E0120 06:56:37.565134 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:56:37.565507 kubelet[2906]: E0120 06:56:37.565195 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:56:37.565507 kubelet[2906]: E0120 06:56:37.565441 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:37.565996 containerd[1680]: time="2026-01-20T06:56:37.565975349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:56:37.567403 kubelet[2906]: E0120 06:56:37.567371 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:56:37.908473 containerd[1680]: time="2026-01-20T06:56:37.908303082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:37.909929 containerd[1680]: time="2026-01-20T06:56:37.909892189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:56:37.910117 containerd[1680]: time="2026-01-20T06:56:37.909922975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:37.910272 kubelet[2906]: E0120 06:56:37.910234 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:37.910317 kubelet[2906]: E0120 06:56:37.910285 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:56:37.910469 kubelet[2906]: E0120 06:56:37.910419 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:37.911951 kubelet[2906]: E0120 06:56:37.911913 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:56:38.211282 containerd[1680]: time="2026-01-20T06:56:38.210751522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:56:38.552522 containerd[1680]: time="2026-01-20T06:56:38.552125702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
06:56:38.554335 containerd[1680]: time="2026-01-20T06:56:38.554175965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:56:38.554335 containerd[1680]: time="2026-01-20T06:56:38.554182867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:38.554705 kubelet[2906]: E0120 06:56:38.554650 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:56:38.554790 kubelet[2906]: E0120 06:56:38.554709 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:56:38.554920 kubelet[2906]: E0120 06:56:38.554873 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htbc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:38.556255 kubelet[2906]: E0120 06:56:38.556205 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:56:47.212707 kubelet[2906]: E0120 06:56:47.212594 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:56:48.211900 kubelet[2906]: E0120 06:56:48.211622 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:56:49.211490 kubelet[2906]: E0120 06:56:49.211181 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:56:50.210949 kubelet[2906]: E0120 06:56:50.210883 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:56:51.210756 kubelet[2906]: E0120 06:56:51.210395 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:56:51.211651 kubelet[2906]: E0120 06:56:51.211623 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:56:58.211853 containerd[1680]: 
time="2026-01-20T06:56:58.211085788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:56:58.553374 containerd[1680]: time="2026-01-20T06:56:58.552966852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:58.554804 containerd[1680]: time="2026-01-20T06:56:58.554722382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:56:58.554804 containerd[1680]: time="2026-01-20T06:56:58.554757384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:58.555045 kubelet[2906]: E0120 06:56:58.554991 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:56:58.555356 kubelet[2906]: E0120 06:56:58.555066 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:56:58.555356 kubelet[2906]: E0120 06:56:58.555241 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:037c0c1e8cbe4b15a6588620b1762857,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:58.557544 containerd[1680]: time="2026-01-20T06:56:58.557505348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:56:58.896192 containerd[1680]: 
time="2026-01-20T06:56:58.895765300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:56:58.897730 containerd[1680]: time="2026-01-20T06:56:58.897626434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:56:58.897730 containerd[1680]: time="2026-01-20T06:56:58.897703420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:56:58.897902 kubelet[2906]: E0120 06:56:58.897868 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:56:58.897968 kubelet[2906]: E0120 06:56:58.897922 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:56:58.898086 kubelet[2906]: E0120 06:56:58.898017 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:56:58.899187 kubelet[2906]: E0120 06:56:58.899156 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:57:00.212094 containerd[1680]: time="2026-01-20T06:57:00.211864356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:57:00.548519 containerd[1680]: time="2026-01-20T06:57:00.548203085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:00.549984 containerd[1680]: time="2026-01-20T06:57:00.549908176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:57:00.550035 containerd[1680]: time="2026-01-20T06:57:00.549984277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:00.550198 kubelet[2906]: E0120 06:57:00.550140 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:00.550433 kubelet[2906]: E0120 06:57:00.550201 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:00.550433 kubelet[2906]: E0120 06:57:00.550321 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkgj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:00.551727 kubelet[2906]: E0120 06:57:00.551662 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:57:02.211728 containerd[1680]: time="2026-01-20T06:57:02.211695486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:57:02.565206 containerd[1680]: time="2026-01-20T06:57:02.565056968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
06:57:02.566842 containerd[1680]: time="2026-01-20T06:57:02.566580141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:57:02.566842 containerd[1680]: time="2026-01-20T06:57:02.566619827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:02.566953 kubelet[2906]: E0120 06:57:02.566818 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:57:02.566953 kubelet[2906]: E0120 06:57:02.566878 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:57:02.567273 kubelet[2906]: E0120 06:57:02.567066 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khrnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:02.567653 containerd[1680]: time="2026-01-20T06:57:02.567632129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:57:02.569119 kubelet[2906]: E0120 06:57:02.569016 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:57:02.889185 containerd[1680]: time="2026-01-20T06:57:02.888703932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:02.890459 containerd[1680]: 
time="2026-01-20T06:57:02.890404219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:57:02.890520 containerd[1680]: time="2026-01-20T06:57:02.890441221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:02.890669 kubelet[2906]: E0120 06:57:02.890639 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:02.890709 kubelet[2906]: E0120 06:57:02.890680 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:02.890822 kubelet[2906]: E0120 06:57:02.890786 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:02.891903 kubelet[2906]: E0120 06:57:02.891881 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:57:03.210600 containerd[1680]: time="2026-01-20T06:57:03.210496375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:57:03.536256 containerd[1680]: time="2026-01-20T06:57:03.536150885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:03.537852 containerd[1680]: time="2026-01-20T06:57:03.537808665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:57:03.537931 containerd[1680]: time="2026-01-20T06:57:03.537888207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:03.538096 kubelet[2906]: E0120 06:57:03.538064 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:57:03.538232 kubelet[2906]: E0120 06:57:03.538177 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:57:03.538386 kubelet[2906]: E0120 06:57:03.538344 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:03.540330 containerd[1680]: time="2026-01-20T06:57:03.540263382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:57:03.880706 containerd[1680]: time="2026-01-20T06:57:03.880403928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:03.882641 containerd[1680]: time="2026-01-20T06:57:03.882555323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:57:03.882717 containerd[1680]: time="2026-01-20T06:57:03.882630907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:03.882986 kubelet[2906]: E0120 06:57:03.882919 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:57:03.882986 kubelet[2906]: E0120 06:57:03.882961 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:57:03.883492 kubelet[2906]: E0120 06:57:03.883365 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:03.884540 kubelet[2906]: E0120 06:57:03.884506 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:57:04.211871 containerd[1680]: time="2026-01-20T06:57:04.211646335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:57:04.551090 containerd[1680]: time="2026-01-20T06:57:04.550665075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:04.553127 containerd[1680]: time="2026-01-20T06:57:04.552988518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:04.553127 containerd[1680]: time="2026-01-20T06:57:04.553032240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:57:04.553281 kubelet[2906]: E0120 06:57:04.553239 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:57:04.553435 kubelet[2906]: E0120 06:57:04.553296 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:57:04.553588 kubelet[2906]: E0120 06:57:04.553422 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htbc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:04.554911 kubelet[2906]: E0120 06:57:04.554877 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:57:10.211624 kubelet[2906]: E0120 06:57:10.211557 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:57:13.211426 kubelet[2906]: E0120 06:57:13.211379 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:57:15.211840 kubelet[2906]: E0120 06:57:15.211696 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:57:15.213135 kubelet[2906]: E0120 06:57:15.212466 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:57:16.210622 kubelet[2906]: E0120 06:57:16.210577 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:57:19.211849 kubelet[2906]: E0120 06:57:19.211658 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:57:25.212849 kubelet[2906]: E0120 
06:57:25.212367 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:57:26.210466 kubelet[2906]: E0120 06:57:26.210405 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:57:27.214574 kubelet[2906]: E0120 06:57:27.214435 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" 
podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:57:29.213998 kubelet[2906]: E0120 06:57:29.213713 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:57:29.214371 kubelet[2906]: E0120 06:57:29.214117 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:57:30.211375 kubelet[2906]: E0120 06:57:30.211278 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:57:37.216961 kubelet[2906]: E0120 06:57:37.216790 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:57:38.211112 kubelet[2906]: E0120 06:57:38.211061 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:57:39.213202 kubelet[2906]: E0120 06:57:39.213166 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:57:40.210815 kubelet[2906]: E0120 06:57:40.210484 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:57:43.212051 kubelet[2906]: E0120 06:57:43.212003 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:57:45.211962 containerd[1680]: time="2026-01-20T06:57:45.210940508Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:57:45.562818 containerd[1680]: time="2026-01-20T06:57:45.562297732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:45.564450 containerd[1680]: time="2026-01-20T06:57:45.564344496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:57:45.564450 containerd[1680]: time="2026-01-20T06:57:45.564428354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:45.564606 kubelet[2906]: E0120 06:57:45.564565 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:57:45.564901 kubelet[2906]: E0120 06:57:45.564616 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:57:45.564982 kubelet[2906]: E0120 06:57:45.564924 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htbc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:45.566124 kubelet[2906]: E0120 06:57:45.566082 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:57:49.211865 containerd[1680]: time="2026-01-20T06:57:49.211139386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:57:49.568417 containerd[1680]: time="2026-01-20T06:57:49.568083760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
06:57:49.570065 containerd[1680]: time="2026-01-20T06:57:49.570019158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:57:49.570138 containerd[1680]: time="2026-01-20T06:57:49.570095190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:49.570297 kubelet[2906]: E0120 06:57:49.570259 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:57:49.571060 kubelet[2906]: E0120 06:57:49.570303 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:57:49.571060 kubelet[2906]: E0120 06:57:49.570450 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khrnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:49.571632 kubelet[2906]: E0120 06:57:49.571606 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:57:50.210424 containerd[1680]: time="2026-01-20T06:57:50.210376775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:57:50.563384 containerd[1680]: time="2026-01-20T06:57:50.563196299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:50.567614 containerd[1680]: 
time="2026-01-20T06:57:50.567561383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:57:50.567749 containerd[1680]: time="2026-01-20T06:57:50.567658938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:50.567885 kubelet[2906]: E0120 06:57:50.567855 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:57:50.567931 kubelet[2906]: E0120 06:57:50.567897 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:57:50.568341 kubelet[2906]: E0120 06:57:50.568018 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:037c0c1e8cbe4b15a6588620b1762857,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:50.569893 containerd[1680]: time="2026-01-20T06:57:50.569868282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:57:50.903017 containerd[1680]: 
time="2026-01-20T06:57:50.902554880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:50.904425 containerd[1680]: time="2026-01-20T06:57:50.904387863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:57:50.904492 containerd[1680]: time="2026-01-20T06:57:50.904471935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:50.905275 kubelet[2906]: E0120 06:57:50.905235 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:57:50.905558 kubelet[2906]: E0120 06:57:50.905289 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:57:50.905558 kubelet[2906]: E0120 06:57:50.905531 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:50.906849 kubelet[2906]: E0120 06:57:50.906786 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:57:52.210937 containerd[1680]: time="2026-01-20T06:57:52.210904057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:57:52.555294 containerd[1680]: time="2026-01-20T06:57:52.555070305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:52.556927 containerd[1680]: time="2026-01-20T06:57:52.556880252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:57:52.557001 containerd[1680]: time="2026-01-20T06:57:52.556959070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:52.557121 kubelet[2906]: E0120 06:57:52.557088 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:52.557367 kubelet[2906]: E0120 06:57:52.557129 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:52.557367 kubelet[2906]: E0120 06:57:52.557238 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkgj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:52.558598 kubelet[2906]: E0120 06:57:52.558574 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:57:54.210534 containerd[1680]: time="2026-01-20T06:57:54.210446966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:57:54.553194 containerd[1680]: time="2026-01-20T06:57:54.552986334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
06:57:54.554818 containerd[1680]: time="2026-01-20T06:57:54.554717727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:57:54.554818 containerd[1680]: time="2026-01-20T06:57:54.554733481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:54.554984 kubelet[2906]: E0120 06:57:54.554942 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:54.554984 kubelet[2906]: E0120 06:57:54.554982 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:57:54.555249 kubelet[2906]: E0120 06:57:54.555094 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:54.557233 kubelet[2906]: E0120 06:57:54.557181 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:57:56.210894 containerd[1680]: time="2026-01-20T06:57:56.210438799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:57:56.544366 containerd[1680]: time="2026-01-20T06:57:56.544215882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:56.546261 containerd[1680]: time="2026-01-20T06:57:56.546177045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:57:56.546327 containerd[1680]: time="2026-01-20T06:57:56.546244527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:56.546491 kubelet[2906]: E0120 06:57:56.546464 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:57:56.547161 kubelet[2906]: E0120 06:57:56.546803 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:57:56.547161 kubelet[2906]: E0120 06:57:56.546948 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:56.549872 containerd[1680]: time="2026-01-20T06:57:56.549847733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:57:56.864975 containerd[1680]: time="2026-01-20T06:57:56.864851692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:57:56.866643 containerd[1680]: time="2026-01-20T06:57:56.866605091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:57:56.866643 containerd[1680]: time="2026-01-20T06:57:56.866671908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:57:56.867700 kubelet[2906]: E0120 06:57:56.867662 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:57:56.867776 kubelet[2906]: E0120 06:57:56.867710 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:57:56.868022 kubelet[2906]: E0120 06:57:56.867843 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:57:56.869269 kubelet[2906]: E0120 06:57:56.869241 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:57:58.210811 kubelet[2906]: E0120 06:57:58.210623 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:58:00.211729 kubelet[2906]: E0120 06:58:00.211462 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:58:05.212296 kubelet[2906]: E0120 06:58:05.212227 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:58:08.212299 kubelet[2906]: E0120 06:58:08.212256 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:58:08.212708 kubelet[2906]: E0120 06:58:08.212538 
2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:58:11.213777 kubelet[2906]: E0120 06:58:11.213498 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:58:11.215809 kubelet[2906]: E0120 06:58:11.215766 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" 
podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:58:13.210550 kubelet[2906]: E0120 06:58:13.210091 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:58:20.211385 kubelet[2906]: E0120 06:58:20.211274 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:58:22.210936 kubelet[2906]: E0120 06:58:22.210815 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:58:23.211023 kubelet[2906]: E0120 06:58:23.210953 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:58:25.214170 kubelet[2906]: E0120 06:58:25.214110 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:58:25.214170 kubelet[2906]: E0120 06:58:25.214181 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:58:27.216103 kubelet[2906]: E0120 06:58:27.215943 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:58:35.211186 kubelet[2906]: E0120 06:58:35.211129 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:58:36.210141 kubelet[2906]: E0120 06:58:36.210103 2906 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:58:37.211991 kubelet[2906]: E0120 06:58:37.211958 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:58:37.212350 kubelet[2906]: E0120 06:58:37.212245 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:58:38.210559 kubelet[2906]: E0120 06:58:38.210526 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:58:38.211196 kubelet[2906]: E0120 06:58:38.211120 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:58:46.211489 kubelet[2906]: E0120 06:58:46.211430 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:58:48.210856 kubelet[2906]: E0120 06:58:48.210493 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:58:49.210970 kubelet[2906]: E0120 06:58:49.210728 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:58:50.210428 kubelet[2906]: E0120 06:58:50.210394 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:58:51.213410 kubelet[2906]: E0120 06:58:51.213313 2906 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:58:51.974165 kernel: kauditd_printk_skb: 102 callbacks suppressed Jan 20 06:58:51.974280 kernel: audit: type=1130 audit(1768892331.969:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-20.161.92.111:38942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:58:51.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-20.161.92.111:38942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:58:51.970075 systemd[1]: Started sshd@9-10.0.0.92:22-20.161.92.111:38942.service - OpenSSH per-connection server daemon (20.161.92.111:38942). 
Jan 20 06:58:52.211028 kubelet[2906]: E0120 06:58:52.210798 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:58:52.555000 audit[5342]: USER_ACCT pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.560914 kernel: audit: type=1101 audit(1768892332.555:745): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.560981 sshd[5342]: Accepted publickey for core from 20.161.92.111 port 38942 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:58:52.560000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.562365 sshd-session[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:58:52.564854 kernel: audit: type=1103 audit(1768892332.560:746): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.560000 audit[5342]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3bf92470 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:58:52.568371 kernel: audit: type=1006 audit(1768892332.560:747): pid=5342 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 06:58:52.568418 kernel: audit: type=1300 audit(1768892332.560:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3bf92470 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:58:52.560000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:58:52.571890 kernel: audit: type=1327 audit(1768892332.560:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:58:52.575926 systemd-logind[1660]: New session 11 of user core. Jan 20 06:58:52.583054 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 06:58:52.587000 audit[5342]: USER_START pid=5342 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.593847 kernel: audit: type=1105 audit(1768892332.587:748): pid=5342 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.589000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.598902 kernel: audit: type=1103 audit(1768892332.589:749): pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.952302 sshd[5346]: Connection closed by 20.161.92.111 port 38942 Jan 20 06:58:52.953996 sshd-session[5342]: pam_unix(sshd:session): session closed for user core Jan 20 06:58:52.955000 audit[5342]: USER_END pid=5342 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.959343 systemd-logind[1660]: Session 11 logged out. 
Waiting for processes to exit. Jan 20 06:58:52.961500 systemd[1]: sshd@9-10.0.0.92:22-20.161.92.111:38942.service: Deactivated successfully. Jan 20 06:58:52.962871 kernel: audit: type=1106 audit(1768892332.955:750): pid=5342 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.955000 audit[5342]: CRED_DISP pid=5342 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.966159 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 06:58:52.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-20.161.92.111:38942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:58:52.968412 kernel: audit: type=1104 audit(1768892332.955:751): pid=5342 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:52.969245 systemd-logind[1660]: Removed session 11. 
Jan 20 06:58:57.212067 kubelet[2906]: E0120 06:58:57.211932 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:58:58.064398 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:58:58.064612 kernel: audit: type=1130 audit(1768892338.062:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-20.161.92.111:51872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:58:58.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-20.161.92.111:51872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:58:58.062954 systemd[1]: Started sshd@10-10.0.0.92:22-20.161.92.111:51872.service - OpenSSH per-connection server daemon (20.161.92.111:51872). 
Jan 20 06:58:58.615000 audit[5360]: USER_ACCT pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:58.618415 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:58:58.619247 sshd[5360]: Accepted publickey for core from 20.161.92.111 port 51872 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:58:58.620952 kernel: audit: type=1101 audit(1768892338.615:754): pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:58.615000 audit[5360]: CRED_ACQ pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:58.624854 kernel: audit: type=1103 audit(1768892338.615:755): pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:58.631904 kernel: audit: type=1006 audit(1768892338.615:756): pid=5360 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 06:58:58.631986 kernel: audit: type=1300 audit(1768892338.615:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecad3ded0 a2=3 a3=0 items=0 ppid=1 pid=5360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:58:58.615000 audit[5360]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecad3ded0 a2=3 a3=0 items=0 ppid=1 pid=5360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:58:58.637189 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 06:58:58.638553 systemd-logind[1660]: New session 12 of user core. Jan 20 06:58:58.615000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:58:58.645060 kernel: audit: type=1327 audit(1768892338.615:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:58:58.642000 audit[5360]: USER_START pid=5360 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:58.650876 kernel: audit: type=1105 audit(1768892338.642:757): pid=5360 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:58.650000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:58.655857 kernel: audit: type=1103 audit(1768892338.650:758): pid=5364 uid=0 auid=500 ses=12 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:59.004404 sshd[5364]: Connection closed by 20.161.92.111 port 51872 Jan 20 06:58:59.004225 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Jan 20 06:58:59.005000 audit[5360]: USER_END pid=5360 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:59.011931 kernel: audit: type=1106 audit(1768892339.005:759): pid=5360 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:59.011928 systemd-logind[1660]: Session 12 logged out. Waiting for processes to exit. Jan 20 06:58:59.006000 audit[5360]: CRED_DISP pid=5360 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:59.012976 systemd[1]: sshd@10-10.0.0.92:22-20.161.92.111:51872.service: Deactivated successfully. Jan 20 06:58:59.014516 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 06:58:59.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-20.161.92.111:51872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:58:59.016860 kernel: audit: type=1104 audit(1768892339.006:760): pid=5360 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:58:59.018070 systemd-logind[1660]: Removed session 12. Jan 20 06:58:59.211638 kubelet[2906]: E0120 06:58:59.211361 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:59:02.210861 kubelet[2906]: E0120 06:59:02.210739 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:59:02.211512 kubelet[2906]: E0120 06:59:02.211056 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:59:04.122417 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:59:04.122540 kernel: audit: type=1130 audit(1768892344.116:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.92:22-20.161.92.111:44716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:04.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.92:22-20.161.92.111:44716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:04.117092 systemd[1]: Started sshd@11-10.0.0.92:22-20.161.92.111:44716.service - OpenSSH per-connection server daemon (20.161.92.111:44716). Jan 20 06:59:04.210612 kubelet[2906]: E0120 06:59:04.210555 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:59:04.676000 audit[5385]: USER_ACCT pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:04.681866 kernel: audit: type=1101 audit(1768892344.676:763): pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:04.679992 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:04.682140 sshd[5385]: Accepted publickey for core from 20.161.92.111 port 44716 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:04.678000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:04.686968 kernel: audit: type=1103 audit(1768892344.678:764): pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:04.690684 systemd-logind[1660]: New session 13 of user core. Jan 20 06:59:04.696527 kernel: audit: type=1006 audit(1768892344.678:765): pid=5385 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 06:59:04.696040 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 06:59:04.678000 audit[5385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd67f3b250 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:04.706990 kernel: audit: type=1300 audit(1768892344.678:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd67f3b250 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:04.678000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:04.712866 kernel: audit: type=1327 audit(1768892344.678:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:04.707000 audit[5385]: USER_START pid=5385 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:04.718871 kernel: audit: type=1105 audit(1768892344.707:766): pid=5385 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:04.718933 kernel: audit: type=1103 audit(1768892344.709:767): pid=5392 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 
terminal=ssh res=success' Jan 20 06:59:04.709000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.038859 sshd[5392]: Connection closed by 20.161.92.111 port 44716 Jan 20 06:59:05.038301 sshd-session[5385]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:05.038000 audit[5385]: USER_END pid=5385 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.045855 kernel: audit: type=1106 audit(1768892345.038:768): pid=5385 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.038000 audit[5385]: CRED_DISP pid=5385 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.047381 systemd[1]: sshd@11-10.0.0.92:22-20.161.92.111:44716.service: Deactivated successfully. 
Jan 20 06:59:05.051463 kernel: audit: type=1104 audit(1768892345.038:769): pid=5385 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.92:22-20.161.92.111:44716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:05.051272 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 06:59:05.052301 systemd-logind[1660]: Session 13 logged out. Waiting for processes to exit. Jan 20 06:59:05.055646 systemd-logind[1660]: Removed session 13. Jan 20 06:59:05.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.92:22-20.161.92.111:44728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:05.150170 systemd[1]: Started sshd@12-10.0.0.92:22-20.161.92.111:44728.service - OpenSSH per-connection server daemon (20.161.92.111:44728). 
Jan 20 06:59:05.667000 audit[5406]: USER_ACCT pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.668136 sshd[5406]: Accepted publickey for core from 20.161.92.111 port 44728 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:05.668000 audit[5406]: CRED_ACQ pid=5406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.668000 audit[5406]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6f97d230 a2=3 a3=0 items=0 ppid=1 pid=5406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:05.668000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:05.669809 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:05.679625 systemd-logind[1660]: New session 14 of user core. Jan 20 06:59:05.682396 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 06:59:05.686000 audit[5406]: USER_START pid=5406 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:05.688000 audit[5410]: CRED_ACQ pid=5410 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:06.069148 sshd[5410]: Connection closed by 20.161.92.111 port 44728 Jan 20 06:59:06.069552 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:06.072000 audit[5406]: USER_END pid=5406 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:06.073000 audit[5406]: CRED_DISP pid=5406 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:06.076235 systemd[1]: sshd@12-10.0.0.92:22-20.161.92.111:44728.service: Deactivated successfully. Jan 20 06:59:06.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.92:22-20.161.92.111:44728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:06.081289 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 20 06:59:06.083282 systemd-logind[1660]: Session 14 logged out. Waiting for processes to exit. Jan 20 06:59:06.084658 systemd-logind[1660]: Removed session 14. Jan 20 06:59:06.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-20.161.92.111:44744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:06.174955 systemd[1]: Started sshd@13-10.0.0.92:22-20.161.92.111:44744.service - OpenSSH per-connection server daemon (20.161.92.111:44744). Jan 20 06:59:06.697000 audit[5420]: USER_ACCT pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:06.698629 sshd[5420]: Accepted publickey for core from 20.161.92.111 port 44744 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:06.698000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:06.698000 audit[5420]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc6c01330 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:06.698000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:06.700215 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:06.705335 systemd-logind[1660]: New session 15 of user core. 
Jan 20 06:59:06.712184 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 06:59:06.714000 audit[5420]: USER_START pid=5420 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:06.716000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:07.054370 sshd[5424]: Connection closed by 20.161.92.111 port 44744 Jan 20 06:59:07.054214 sshd-session[5420]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:07.055000 audit[5420]: USER_END pid=5420 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:07.055000 audit[5420]: CRED_DISP pid=5420 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:07.058637 systemd[1]: sshd@13-10.0.0.92:22-20.161.92.111:44744.service: Deactivated successfully. Jan 20 06:59:07.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-20.161.92.111:44744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:07.060407 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 06:59:07.061756 systemd-logind[1660]: Session 15 logged out. Waiting for processes to exit. Jan 20 06:59:07.063240 systemd-logind[1660]: Removed session 15. Jan 20 06:59:07.212432 kubelet[2906]: E0120 06:59:07.211683 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:59:10.211265 kubelet[2906]: E0120 06:59:10.210697 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:59:10.213364 kubelet[2906]: E0120 06:59:10.213282 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:59:12.160914 systemd[1]: Started sshd@14-10.0.0.92:22-20.161.92.111:44760.service - OpenSSH per-connection server daemon (20.161.92.111:44760). Jan 20 06:59:12.166471 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 06:59:12.167424 kernel: audit: type=1130 audit(1768892352.160:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-20.161.92.111:44760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:12.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-20.161.92.111:44760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:12.685000 audit[5436]: USER_ACCT pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:12.688012 sshd[5436]: Accepted publickey for core from 20.161.92.111 port 44760 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:12.690843 kernel: audit: type=1101 audit(1768892352.685:790): pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:12.692783 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:12.690000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:12.700713 kernel: audit: type=1103 audit(1768892352.690:791): pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:12.700777 kernel: audit: type=1006 audit(1768892352.691:792): pid=5436 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 06:59:12.691000 audit[5436]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13bb1700 a2=3 a3=0 items=0 ppid=1 pid=5436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:12.691000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:12.706098 kernel: audit: type=1300 audit(1768892352.691:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13bb1700 a2=3 a3=0 items=0 ppid=1 pid=5436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:12.706151 kernel: audit: type=1327 audit(1768892352.691:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:12.713258 systemd-logind[1660]: New session 16 of user core. Jan 20 06:59:12.722025 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 06:59:12.724000 audit[5436]: USER_START pid=5436 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:12.731000 kernel: audit: type=1105 audit(1768892352.724:793): pid=5436 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:12.730000 audit[5440]: CRED_ACQ pid=5440 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:12.734850 kernel: audit: type=1103 audit(1768892352.730:794): 
pid=5440 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.045648 sshd[5440]: Connection closed by 20.161.92.111 port 44760 Jan 20 06:59:13.046972 sshd-session[5436]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:13.048000 audit[5436]: USER_END pid=5436 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.054885 kernel: audit: type=1106 audit(1768892353.048:795): pid=5436 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.056707 systemd[1]: sshd@14-10.0.0.92:22-20.161.92.111:44760.service: Deactivated successfully. Jan 20 06:59:13.053000 audit[5436]: CRED_DISP pid=5436 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.059595 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 06:59:13.061991 systemd-logind[1660]: Session 16 logged out. Waiting for processes to exit. 
Jan 20 06:59:13.063146 kernel: audit: type=1104 audit(1768892353.053:796): pid=5436 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.063391 systemd-logind[1660]: Removed session 16. Jan 20 06:59:13.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-20.161.92.111:44760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:13.159816 systemd[1]: Started sshd@15-10.0.0.92:22-20.161.92.111:44510.service - OpenSSH per-connection server daemon (20.161.92.111:44510). Jan 20 06:59:13.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-20.161.92.111:44510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:13.216339 containerd[1680]: time="2026-01-20T06:59:13.216047696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:59:13.571963 containerd[1680]: time="2026-01-20T06:59:13.571391076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:13.573258 containerd[1680]: time="2026-01-20T06:59:13.573213918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:59:13.573317 containerd[1680]: time="2026-01-20T06:59:13.573303061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:13.573471 kubelet[2906]: E0120 06:59:13.573443 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:59:13.574196 kubelet[2906]: E0120 06:59:13.573801 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:59:13.574310 kubelet[2906]: E0120 06:59:13.574269 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khrnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8j6t9_calico-system(3f79bfe2-fd9a-4aff-ac23-745eeb4426b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:59:13.575519 kubelet[2906]: E0120 06:59:13.575482 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:59:13.701000 audit[5452]: USER_ACCT pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.702587 sshd[5452]: Accepted publickey for 
core from 20.161.92.111 port 44510 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:13.703000 audit[5452]: CRED_ACQ pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.703000 audit[5452]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff11762080 a2=3 a3=0 items=0 ppid=1 pid=5452 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:13.703000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:13.704857 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:13.713170 systemd-logind[1660]: New session 17 of user core. Jan 20 06:59:13.720066 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 06:59:13.722000 audit[5452]: USER_START pid=5452 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:13.724000 audit[5456]: CRED_ACQ pid=5456 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:14.210776 containerd[1680]: time="2026-01-20T06:59:14.210514007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:59:14.390546 sshd[5456]: Connection closed by 20.161.92.111 port 44510 Jan 20 06:59:14.391022 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:14.392000 audit[5452]: USER_END pid=5452 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:14.392000 audit[5452]: CRED_DISP pid=5452 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:14.395138 systemd-logind[1660]: Session 17 logged out. Waiting for processes to exit. Jan 20 06:59:14.397185 systemd[1]: sshd@15-10.0.0.92:22-20.161.92.111:44510.service: Deactivated successfully. 
Jan 20 06:59:14.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-20.161.92.111:44510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:14.399895 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 06:59:14.402427 systemd-logind[1660]: Removed session 17. Jan 20 06:59:14.497378 systemd[1]: Started sshd@16-10.0.0.92:22-20.161.92.111:44518.service - OpenSSH per-connection server daemon (20.161.92.111:44518). Jan 20 06:59:14.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-20.161.92.111:44518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:14.551553 containerd[1680]: time="2026-01-20T06:59:14.551495870Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:14.553117 containerd[1680]: time="2026-01-20T06:59:14.553007882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:59:14.553117 containerd[1680]: time="2026-01-20T06:59:14.553055949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:14.553341 kubelet[2906]: E0120 06:59:14.553231 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:59:14.553341 kubelet[2906]: E0120 06:59:14.553289 2906 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:59:14.553649 kubelet[2906]: E0120 06:59:14.553407 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htbc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-676fb446dd-r6rmf_calico-system(e2f4d2c7-d335-42bb-9262-2a522436304e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:59:14.555475 kubelet[2906]: E0120 06:59:14.555420 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:59:15.189000 audit[5466]: 
USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:15.190138 sshd[5466]: Accepted publickey for core from 20.161.92.111 port 44518 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:15.193700 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:15.192000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:15.192000 audit[5466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc82eb260 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:15.192000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:15.197969 systemd-logind[1660]: New session 18 of user core. Jan 20 06:59:15.204261 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 06:59:15.208000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:15.211000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:15.213627 containerd[1680]: time="2026-01-20T06:59:15.213602444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:59:16.448004 containerd[1680]: time="2026-01-20T06:59:16.447964176Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:16.449841 containerd[1680]: time="2026-01-20T06:59:16.449733679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:59:16.449841 containerd[1680]: time="2026-01-20T06:59:16.449811668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:16.450026 kubelet[2906]: E0120 06:59:16.449988 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:59:16.451220 kubelet[2906]: E0120 06:59:16.450038 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:59:16.451220 kubelet[2906]: E0120 06:59:16.450158 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-c4gmt_calico-apiserver(52f9dd62-5a28-4d34-8a7e-35c040c0ecfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:59:16.452202 kubelet[2906]: E0120 06:59:16.452140 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:59:16.544000 audit[5480]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:59:16.544000 audit[5480]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe3e4b2610 a2=0 a3=7ffe3e4b25fc 
items=0 ppid=3052 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:16.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:59:16.552000 audit[5480]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:59:16.552000 audit[5480]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe3e4b2610 a2=0 a3=0 items=0 ppid=3052 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:16.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:59:16.574000 audit[5482]: NETFILTER_CFG table=filter:143 family=2 entries=38 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:59:16.574000 audit[5482]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc7dbd1d60 a2=0 a3=7ffc7dbd1d4c items=0 ppid=3052 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:16.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:59:16.582000 audit[5482]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:59:16.582000 audit[5482]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5772 a0=3 a1=7ffc7dbd1d60 a2=0 a3=0 items=0 ppid=3052 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:16.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:59:16.646020 sshd[5470]: Connection closed by 20.161.92.111 port 44518 Jan 20 06:59:16.647783 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:16.648000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:16.648000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:16.651902 systemd[1]: sshd@16-10.0.0.92:22-20.161.92.111:44518.service: Deactivated successfully. Jan 20 06:59:16.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-20.161.92.111:44518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:16.655273 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 06:59:16.658007 systemd-logind[1660]: Session 18 logged out. Waiting for processes to exit. Jan 20 06:59:16.659421 systemd-logind[1660]: Removed session 18. 
Jan 20 06:59:16.757121 systemd[1]: Started sshd@17-10.0.0.92:22-20.161.92.111:44532.service - OpenSSH per-connection server daemon (20.161.92.111:44532). Jan 20 06:59:16.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-20.161.92.111:44532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:17.314136 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 20 06:59:17.314240 kernel: audit: type=1101 audit(1768892357.309:821): pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.309000 audit[5487]: USER_ACCT pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.312322 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:17.314525 sshd[5487]: Accepted publickey for core from 20.161.92.111 port 44532 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:17.310000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.322113 systemd-logind[1660]: New session 19 of user core. 
Jan 20 06:59:17.324614 kernel: audit: type=1103 audit(1768892357.310:822): pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.324665 kernel: audit: type=1006 audit(1768892357.310:823): pid=5487 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 20 06:59:17.310000 audit[5487]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdea0c43b0 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:17.327918 kernel: audit: type=1300 audit(1768892357.310:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdea0c43b0 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:17.310000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:17.331120 kernel: audit: type=1327 audit(1768892357.310:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:17.332010 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 06:59:17.335000 audit[5487]: USER_START pid=5487 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.338000 audit[5491]: CRED_ACQ pid=5491 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.343885 kernel: audit: type=1105 audit(1768892357.335:824): pid=5487 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.344227 kernel: audit: type=1103 audit(1768892357.338:825): pid=5491 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.852885 sshd[5491]: Connection closed by 20.161.92.111 port 44532 Jan 20 06:59:17.851753 sshd-session[5487]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:17.854000 audit[5487]: USER_END pid=5487 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.861068 kernel: audit: type=1106 
audit(1768892357.854:826): pid=5487 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.858331 systemd-logind[1660]: Session 19 logged out. Waiting for processes to exit. Jan 20 06:59:17.860404 systemd[1]: sshd@17-10.0.0.92:22-20.161.92.111:44532.service: Deactivated successfully. Jan 20 06:59:17.854000 audit[5487]: CRED_DISP pid=5487 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.867065 kernel: audit: type=1104 audit(1768892357.854:827): pid=5487 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:17.865352 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 06:59:17.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-20.161.92.111:44532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:17.872987 kernel: audit: type=1131 audit(1768892357.860:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-20.161.92.111:44532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:17.872353 systemd-logind[1660]: Removed session 19. 
Jan 20 06:59:17.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.92:22-20.161.92.111:44548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:17.962917 systemd[1]: Started sshd@18-10.0.0.92:22-20.161.92.111:44548.service - OpenSSH per-connection server daemon (20.161.92.111:44548). Jan 20 06:59:18.173469 update_engine[1661]: I20260120 06:59:18.173415 1661 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 06:59:18.173469 update_engine[1661]: I20260120 06:59:18.173470 1661 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 06:59:18.173840 update_engine[1661]: I20260120 06:59:18.173667 1661 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 06:59:18.174100 update_engine[1661]: I20260120 06:59:18.174084 1661 omaha_request_params.cc:62] Current group set to developer Jan 20 06:59:18.187514 update_engine[1661]: I20260120 06:59:18.187034 1661 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 06:59:18.187514 update_engine[1661]: I20260120 06:59:18.187077 1661 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 06:59:18.187514 update_engine[1661]: I20260120 06:59:18.187096 1661 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 06:59:18.210522 containerd[1680]: time="2026-01-20T06:59:18.210301330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:59:18.512000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:18.515000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:18.515000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7b6f3fd0 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:18.515000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:18.518045 sshd[5500]: Accepted publickey for core from 20.161.92.111 port 44548 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:18.516768 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:18.525698 systemd-logind[1660]: New session 20 of user core. Jan 20 06:59:18.530038 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 06:59:18.533000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:18.535000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:18.776002 update_engine[1661]: I20260120 06:59:18.775882 1661 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 06:59:18.776002 update_engine[1661]: I20260120 06:59:18.775986 1661 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 06:59:18.776002 update_engine[1661]: I20260120 06:59:18.775995 1661 omaha_request_action.cc:272] Request: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: Jan 20 06:59:18.776002 update_engine[1661]: I20260120 06:59:18.776000 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:59:18.846155 locksmithd[1707]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 06:59:18.944330 update_engine[1661]: I20260120 06:59:18.944275 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:59:18.946845 update_engine[1661]: I20260120 06:59:18.944958 1661 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 06:59:18.952310 update_engine[1661]: E20260120 06:59:18.952245 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:59:18.952403 update_engine[1661]: I20260120 06:59:18.952332 1661 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 06:59:19.116001 sshd[5504]: Connection closed by 20.161.92.111 port 44548 Jan 20 06:59:19.116433 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:19.118000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:19.118000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:19.121887 systemd-logind[1660]: Session 20 logged out. Waiting for processes to exit. Jan 20 06:59:19.122428 systemd[1]: sshd@18-10.0.0.92:22-20.161.92.111:44548.service: Deactivated successfully. Jan 20 06:59:19.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.92:22-20.161.92.111:44548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:19.125467 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 06:59:19.127635 systemd-logind[1660]: Removed session 20. 
Jan 20 06:59:19.350783 containerd[1680]: time="2026-01-20T06:59:19.350725454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:19.909535 containerd[1680]: time="2026-01-20T06:59:19.909452328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:59:19.909535 containerd[1680]: time="2026-01-20T06:59:19.909501054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:19.910038 kubelet[2906]: E0120 06:59:19.909812 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:59:19.910038 kubelet[2906]: E0120 06:59:19.909882 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:59:19.910038 kubelet[2906]: E0120 06:59:19.909981 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 06:59:19.912998 containerd[1680]: time="2026-01-20T06:59:19.912813032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:59:20.263913 containerd[1680]: time="2026-01-20T06:59:20.263759222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:20.266049 containerd[1680]: time="2026-01-20T06:59:20.265924988Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:59:20.266049 containerd[1680]: time="2026-01-20T06:59:20.266026285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:20.266306 kubelet[2906]: E0120 06:59:20.266259 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:59:20.266365 kubelet[2906]: E0120 06:59:20.266310 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:59:20.267126 kubelet[2906]: E0120 06:59:20.266916 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vggc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8w7k_calico-system(506bd27e-3197-4d34-a858-e04017d318df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:59:20.268322 kubelet[2906]: E0120 06:59:20.268298 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:59:21.213039 containerd[1680]: time="2026-01-20T06:59:21.212965945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:59:22.192800 containerd[1680]: time="2026-01-20T06:59:22.192732466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:22.194604 containerd[1680]: time="2026-01-20T06:59:22.194509483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:59:22.194604 containerd[1680]: time="2026-01-20T06:59:22.194580391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:22.194756 kubelet[2906]: E0120 06:59:22.194707 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:59:22.195447 kubelet[2906]: E0120 06:59:22.194756 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:59:22.195447 kubelet[2906]: E0120 06:59:22.194867 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:037c0c1e8cbe4b15a6588620b1762857,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:59:22.197654 containerd[1680]: time="2026-01-20T06:59:22.197551040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:59:22.533096 containerd[1680]: time="2026-01-20T06:59:22.532345994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:22.677937 containerd[1680]: time="2026-01-20T06:59:22.677792768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:59:22.677937 containerd[1680]: time="2026-01-20T06:59:22.677910442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:22.678251 kubelet[2906]: E0120 06:59:22.678216 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:59:22.678469 kubelet[2906]: E0120 06:59:22.678329 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:59:22.678469 kubelet[2906]: E0120 06:59:22.678436 2906 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkfck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79cc8b6859-6sdc2_calico-system(eb945f54-1cfd-4248-85ea-34b880e5b4b5): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:59:22.679792 kubelet[2906]: E0120 06:59:22.679756 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:59:24.210858 containerd[1680]: time="2026-01-20T06:59:24.210719843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:59:24.228571 systemd[1]: Started sshd@19-10.0.0.92:22-20.161.92.111:56872.service - OpenSSH per-connection server daemon (20.161.92.111:56872). Jan 20 06:59:24.232167 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 20 06:59:24.232194 kernel: audit: type=1130 audit(1768892364.227:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.92:22-20.161.92.111:56872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:24.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.92:22-20.161.92.111:56872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:24.560355 containerd[1680]: time="2026-01-20T06:59:24.560113758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:59:24.561840 containerd[1680]: time="2026-01-20T06:59:24.561760474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:59:24.561936 containerd[1680]: time="2026-01-20T06:59:24.561885460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:59:24.562660 kubelet[2906]: E0120 06:59:24.562621 2906 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:59:24.562935 kubelet[2906]: E0120 06:59:24.562669 2906 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:59:24.562935 kubelet[2906]: E0120 06:59:24.562784 2906 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkgj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-597cf6c9f4-jx5cs_calico-apiserver(de3eadd9-d35e-43b1-acf5-88fe04381bf9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:59:24.564048 kubelet[2906]: E0120 06:59:24.564022 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:59:24.769000 audit[5555]: USER_ACCT pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:24.770330 sshd[5555]: Accepted publickey for core from 20.161.92.111 port 56872 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:24.774000 audit[5555]: CRED_ACQ pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:24.775804 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:24.776540 kernel: audit: type=1101 audit(1768892364.769:839): pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:24.776581 kernel: audit: type=1103 audit(1768892364.774:840): pid=5555 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:24.780932 kernel: audit: type=1006 audit(1768892364.774:841): pid=5555 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 06:59:24.774000 audit[5555]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa7fa0af0 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:24.782114 systemd-logind[1660]: New session 21 of user core. Jan 20 06:59:24.784657 kernel: audit: type=1300 audit(1768892364.774:841): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa7fa0af0 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:24.774000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:24.787989 kernel: audit: type=1327 audit(1768892364.774:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:24.789414 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 06:59:24.793000 audit[5555]: USER_START pid=5555 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:24.804432 kernel: audit: type=1105 audit(1768892364.793:842): pid=5555 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:24.804517 kernel: audit: type=1103 audit(1768892364.799:843): pid=5559 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:24.799000 audit[5559]: CRED_ACQ pid=5559 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:25.153266 sshd[5559]: Connection closed by 20.161.92.111 port 56872 Jan 20 06:59:25.154027 sshd-session[5555]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:25.155000 audit[5555]: USER_END pid=5555 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:25.159190 systemd[1]: 
sshd@19-10.0.0.92:22-20.161.92.111:56872.service: Deactivated successfully. Jan 20 06:59:25.161743 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 06:59:25.162198 kernel: audit: type=1106 audit(1768892365.155:844): pid=5555 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:25.165064 systemd-logind[1660]: Session 21 logged out. Waiting for processes to exit. Jan 20 06:59:25.170847 kernel: audit: type=1104 audit(1768892365.155:845): pid=5555 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:25.155000 audit[5555]: CRED_DISP pid=5555 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:25.171342 systemd-logind[1660]: Removed session 21. Jan 20 06:59:25.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.92:22-20.161.92.111:56872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:25.346000 audit[5571]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:59:25.346000 audit[5571]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7ab09140 a2=0 a3=7ffc7ab0912c items=0 ppid=3052 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:25.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:59:25.351000 audit[5571]: NETFILTER_CFG table=nat:146 family=2 entries=104 op=nft_register_chain pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:59:25.351000 audit[5571]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc7ab09140 a2=0 a3=7ffc7ab0912c items=0 ppid=3052 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:25.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:59:28.217972 kubelet[2906]: E0120 06:59:28.217917 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:59:28.218378 kubelet[2906]: 
E0120 06:59:28.217989 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:59:29.177377 update_engine[1661]: I20260120 06:59:29.176916 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:59:29.177377 update_engine[1661]: I20260120 06:59:29.177011 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:59:29.177377 update_engine[1661]: I20260120 06:59:29.177338 1661 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 06:59:29.184254 update_engine[1661]: E20260120 06:59:29.184145 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:59:29.184254 update_engine[1661]: I20260120 06:59:29.184229 1661 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 06:59:30.268670 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 06:59:30.268750 kernel: audit: type=1130 audit(1768892370.262:849): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.92:22-20.161.92.111:56886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:30.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.92:22-20.161.92.111:56886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:30.263072 systemd[1]: Started sshd@20-10.0.0.92:22-20.161.92.111:56886.service - OpenSSH per-connection server daemon (20.161.92.111:56886). Jan 20 06:59:30.802000 audit[5582]: USER_ACCT pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:30.805951 sshd[5582]: Accepted publickey for core from 20.161.92.111 port 56886 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:30.806000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:30.808522 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:30.810011 kernel: audit: type=1101 audit(1768892370.802:850): pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:30.810070 kernel: audit: type=1103 audit(1768892370.806:851): pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:30.813701 kernel: audit: type=1006 audit(1768892370.806:852): pid=5582 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 06:59:30.806000 audit[5582]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=8 a1=7ffcff8bab10 a2=3 a3=0 items=0 ppid=1 pid=5582 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:30.817676 kernel: audit: type=1300 audit(1768892370.806:852): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcff8bab10 a2=3 a3=0 items=0 ppid=1 pid=5582 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:30.817848 systemd-logind[1660]: New session 22 of user core. Jan 20 06:59:30.806000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:30.821681 kernel: audit: type=1327 audit(1768892370.806:852): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:30.824055 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 06:59:30.826000 audit[5582]: USER_START pid=5582 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:30.832847 kernel: audit: type=1105 audit(1768892370.826:853): pid=5582 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:30.832937 kernel: audit: type=1103 audit(1768892370.829:854): pid=5586 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:30.829000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:31.166852 sshd[5586]: Connection closed by 20.161.92.111 port 56886 Jan 20 06:59:31.189487 kernel: audit: type=1106 audit(1768892371.170:855): pid=5582 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:31.189541 kernel: audit: type=1104 audit(1768892371.170:856): pid=5582 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:31.170000 audit[5582]: USER_END pid=5582 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:31.170000 audit[5582]: CRED_DISP pid=5582 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:31.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@20-10.0.0.92:22-20.161.92.111:56886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:31.167428 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:31.179371 systemd-logind[1660]: Session 22 logged out. Waiting for processes to exit. Jan 20 06:59:31.180289 systemd[1]: sshd@20-10.0.0.92:22-20.161.92.111:56886.service: Deactivated successfully. Jan 20 06:59:31.182889 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 06:59:31.186124 systemd-logind[1660]: Removed session 22. Jan 20 06:59:31.213852 kubelet[2906]: E0120 06:59:31.213133 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:59:31.214927 kubelet[2906]: E0120 06:59:31.214878 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:59:36.211665 kubelet[2906]: E0120 06:59:36.211330 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:59:36.211665 kubelet[2906]: E0120 06:59:36.211638 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:59:36.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-20.161.92.111:54286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:36.274063 systemd[1]: Started sshd@21-10.0.0.92:22-20.161.92.111:54286.service - OpenSSH per-connection server daemon (20.161.92.111:54286). Jan 20 06:59:36.275161 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:59:36.275204 kernel: audit: type=1130 audit(1768892376.273:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-20.161.92.111:54286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:36.795000 audit[5598]: USER_ACCT pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:36.798843 sshd[5598]: Accepted publickey for core from 20.161.92.111 port 54286 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:36.799597 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:36.795000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:36.804089 kernel: audit: type=1101 audit(1768892376.795:859): pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:36.804145 kernel: audit: type=1103 audit(1768892376.795:860): pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:36.811551 systemd-logind[1660]: New session 23 of user core. Jan 20 06:59:36.795000 audit[5598]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe67681120 a2=3 a3=0 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:36.816297 kernel: audit: type=1006 audit(1768892376.795:861): pid=5598 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 06:59:36.816519 kernel: audit: type=1300 audit(1768892376.795:861): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe67681120 a2=3 a3=0 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:36.814165 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 06:59:36.795000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:36.818330 kernel: audit: type=1327 audit(1768892376.795:861): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:36.821000 audit[5598]: USER_START pid=5598 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:36.827878 kernel: audit: type=1105 audit(1768892376.821:862): pid=5598 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:36.821000 audit[5608]: CRED_ACQ pid=5608 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:36.834869 kernel: audit: type=1103 audit(1768892376.821:863): pid=5608 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:37.144815 sshd[5608]: Connection closed by 20.161.92.111 port 54286 Jan 20 06:59:37.145973 sshd-session[5598]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:37.146000 audit[5598]: USER_END pid=5598 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:37.149397 systemd-logind[1660]: Session 23 logged out. Waiting for processes to exit. Jan 20 06:59:37.152856 kernel: audit: type=1106 audit(1768892377.146:864): pid=5598 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:37.151113 systemd[1]: sshd@21-10.0.0.92:22-20.161.92.111:54286.service: Deactivated successfully. Jan 20 06:59:37.146000 audit[5598]: CRED_DISP pid=5598 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:37.156873 kernel: audit: type=1104 audit(1768892377.146:865): pid=5598 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:37.153559 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 06:59:37.155623 systemd-logind[1660]: Removed session 23. Jan 20 06:59:37.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-20.161.92.111:54286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:59:39.175057 update_engine[1661]: I20260120 06:59:39.174958 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:59:39.175449 update_engine[1661]: I20260120 06:59:39.175071 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:59:39.175647 update_engine[1661]: I20260120 06:59:39.175578 1661 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 06:59:39.182784 update_engine[1661]: E20260120 06:59:39.182735 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:59:39.182886 update_engine[1661]: I20260120 06:59:39.182819 1661 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 06:59:39.211070 kubelet[2906]: E0120 06:59:39.211025 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:59:42.211071 kubelet[2906]: E0120 06:59:42.211005 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:59:42.259883 kernel: kauditd_printk_skb: 1 callbacks 
suppressed Jan 20 06:59:42.259968 kernel: audit: type=1130 audit(1768892382.253:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-20.161.92.111:54288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:42.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-20.161.92.111:54288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:42.253945 systemd[1]: Started sshd@22-10.0.0.92:22-20.161.92.111:54288.service - OpenSSH per-connection server daemon (20.161.92.111:54288). Jan 20 06:59:42.781000 audit[5622]: USER_ACCT pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:42.787794 sshd[5622]: Accepted publickey for core from 20.161.92.111 port 54288 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:42.788139 kernel: audit: type=1101 audit(1768892382.781:868): pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:42.787000 audit[5622]: CRED_ACQ pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:42.790563 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:42.794475 kernel: audit: type=1103 
audit(1768892382.787:869): pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:42.799907 kernel: audit: type=1006 audit(1768892382.787:870): pid=5622 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 06:59:42.787000 audit[5622]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd1d4ff40 a2=3 a3=0 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:42.807117 kernel: audit: type=1300 audit(1768892382.787:870): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd1d4ff40 a2=3 a3=0 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:42.805903 systemd-logind[1660]: New session 24 of user core. Jan 20 06:59:42.787000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:42.809935 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 06:59:42.810842 kernel: audit: type=1327 audit(1768892382.787:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:42.814000 audit[5622]: USER_START pid=5622 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:42.820846 kernel: audit: type=1105 audit(1768892382.814:871): pid=5622 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:42.819000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:42.826921 kernel: audit: type=1103 audit(1768892382.819:872): pid=5626 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:43.160178 sshd[5626]: Connection closed by 20.161.92.111 port 54288 Jan 20 06:59:43.158982 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:43.159000 audit[5622]: USER_END pid=5622 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:43.165925 kernel: audit: type=1106 audit(1768892383.159:873): pid=5622 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:43.159000 audit[5622]: CRED_DISP pid=5622 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:43.169858 kernel: audit: type=1104 audit(1768892383.159:874): pid=5622 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:43.169142 systemd-logind[1660]: Session 24 logged out. Waiting for processes to exit. Jan 20 06:59:43.169603 systemd[1]: sshd@22-10.0.0.92:22-20.161.92.111:54288.service: Deactivated successfully. Jan 20 06:59:43.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-20.161.92.111:54288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:43.174141 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 06:59:43.176903 systemd-logind[1660]: Removed session 24. 
Jan 20 06:59:43.212763 kubelet[2906]: E0120 06:59:43.212728 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:59:44.210175 kubelet[2906]: E0120 06:59:44.209953 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 06:59:48.266413 systemd[1]: Started sshd@23-10.0.0.92:22-20.161.92.111:34654.service - OpenSSH per-connection server daemon (20.161.92.111:34654). 
Jan 20 06:59:48.271368 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:59:48.271499 kernel: audit: type=1130 audit(1768892388.266:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-20.161.92.111:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:48.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-20.161.92.111:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:48.789000 audit[5637]: USER_ACCT pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:48.791920 sshd[5637]: Accepted publickey for core from 20.161.92.111 port 34654 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:48.793200 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:48.790000 audit[5637]: CRED_ACQ pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:48.797109 kernel: audit: type=1101 audit(1768892388.789:877): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:48.797151 kernel: audit: type=1103 audit(1768892388.790:878): pid=5637 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:48.804676 systemd-logind[1660]: New session 25 of user core. Jan 20 06:59:48.805263 kernel: audit: type=1006 audit(1768892388.790:879): pid=5637 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 06:59:48.790000 audit[5637]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9e28d0f0 a2=3 a3=0 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:48.810893 kernel: audit: type=1300 audit(1768892388.790:879): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9e28d0f0 a2=3 a3=0 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:48.790000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:48.814149 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 20 06:59:48.815851 kernel: audit: type=1327 audit(1768892388.790:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:48.818000 audit[5637]: USER_START pid=5637 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:48.823000 audit[5641]: CRED_ACQ pid=5641 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:48.825977 kernel: audit: type=1105 audit(1768892388.818:880): pid=5637 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:48.826038 kernel: audit: type=1103 audit(1768892388.823:881): pid=5641 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:49.170159 sshd[5641]: Connection closed by 20.161.92.111 port 34654 Jan 20 06:59:49.172051 sshd-session[5637]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:49.173294 update_engine[1661]: I20260120 06:59:49.172869 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:59:49.173294 update_engine[1661]: I20260120 06:59:49.172927 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:59:49.173294 update_engine[1661]: 
I20260120 06:59:49.173211 1661 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 06:59:49.173000 audit[5637]: USER_END pid=5637 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:49.175988 systemd[1]: sshd@23-10.0.0.92:22-20.161.92.111:34654.service: Deactivated successfully. Jan 20 06:59:49.181240 kernel: audit: type=1106 audit(1768892389.173:882): pid=5637 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:49.181304 update_engine[1661]: E20260120 06:59:49.180601 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180671 1661 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180678 1661 omaha_request_action.cc:617] Omaha request response: Jan 20 06:59:49.181304 update_engine[1661]: E20260120 06:59:49.180739 1661 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180773 1661 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180778 1661 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180783 1661 update_attempter.cc:306] Processing Done. Jan 20 06:59:49.181304 update_engine[1661]: E20260120 06:59:49.180794 1661 update_attempter.cc:619] Update failed. Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180799 1661 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180804 1661 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180810 1661 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180878 1661 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180902 1661 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 06:59:49.181304 update_engine[1661]: I20260120 06:59:49.180908 1661 omaha_request_action.cc:272] Request: Jan 20 06:59:49.181304 update_engine[1661]: Jan 20 06:59:49.181304 update_engine[1661]: Jan 20 06:59:49.181678 update_engine[1661]: Jan 20 06:59:49.181678 update_engine[1661]: Jan 20 06:59:49.181678 update_engine[1661]: Jan 20 06:59:49.181678 update_engine[1661]: Jan 20 06:59:49.181678 update_engine[1661]: I20260120 06:59:49.180913 1661 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:59:49.181678 update_engine[1661]: I20260120 06:59:49.180931 1661 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:59:49.181678 update_engine[1661]: I20260120 06:59:49.181172 1661 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 06:59:49.173000 audit[5637]: CRED_DISP pid=5637 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:49.182956 locksmithd[1707]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 06:59:49.182308 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 06:59:49.185845 kernel: audit: type=1104 audit(1768892389.173:883): pid=5637 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:49.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-20.161.92.111:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:49.185943 systemd-logind[1660]: Session 25 logged out. Waiting for processes to exit. Jan 20 06:59:49.187153 systemd-logind[1660]: Removed session 25. 
Jan 20 06:59:49.188840 update_engine[1661]: E20260120 06:59:49.187659 1661 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:59:49.188840 update_engine[1661]: I20260120 06:59:49.187723 1661 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 06:59:49.188840 update_engine[1661]: I20260120 06:59:49.187731 1661 omaha_request_action.cc:617] Omaha request response: Jan 20 06:59:49.188840 update_engine[1661]: I20260120 06:59:49.187737 1661 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 06:59:49.188840 update_engine[1661]: I20260120 06:59:49.187741 1661 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 06:59:49.188840 update_engine[1661]: I20260120 06:59:49.187747 1661 update_attempter.cc:306] Processing Done. Jan 20 06:59:49.188840 update_engine[1661]: I20260120 06:59:49.187752 1661 update_attempter.cc:310] Error event sent. 
Jan 20 06:59:49.188840 update_engine[1661]: I20260120 06:59:49.187762 1661 update_check_scheduler.cc:74] Next update check in 48m48s Jan 20 06:59:49.189039 locksmithd[1707]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 06:59:49.212849 kubelet[2906]: E0120 06:59:49.212278 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 06:59:49.214617 kubelet[2906]: E0120 06:59:49.214127 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 06:59:54.210171 kubelet[2906]: E0120 06:59:54.209991 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 06:59:54.211710 kubelet[2906]: E0120 06:59:54.210308 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 06:59:54.211710 kubelet[2906]: E0120 06:59:54.210361 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 06:59:54.277540 systemd[1]: Started 
sshd@24-10.0.0.92:22-20.161.92.111:35068.service - OpenSSH per-connection server daemon (20.161.92.111:35068). Jan 20 06:59:54.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.92:22-20.161.92.111:35068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:54.279447 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:59:54.279513 kernel: audit: type=1130 audit(1768892394.277:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.92:22-20.161.92.111:35068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:55.548000 audit[5681]: USER_ACCT pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.549418 sshd[5681]: Accepted publickey for core from 20.161.92.111 port 35068 ssh2: RSA SHA256:UDm463BktspdUhFct6pWVE5/p7Ujsni8bU+0sy/aUGE Jan 20 06:59:55.553985 kernel: audit: type=1101 audit(1768892395.548:886): pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.554907 sshd-session[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:59:55.553000 audit[5681]: CRED_ACQ pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.559841 kernel: 
audit: type=1103 audit(1768892395.553:887): pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.562871 kernel: audit: type=1006 audit(1768892395.553:888): pid=5681 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 06:59:55.553000 audit[5681]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd2d0e0b0 a2=3 a3=0 items=0 ppid=1 pid=5681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:55.567868 kernel: audit: type=1300 audit(1768892395.553:888): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd2d0e0b0 a2=3 a3=0 items=0 ppid=1 pid=5681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:59:55.553000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:55.569846 kernel: audit: type=1327 audit(1768892395.553:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:59:55.571698 systemd-logind[1660]: New session 26 of user core. Jan 20 06:59:55.578038 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 06:59:55.580000 audit[5681]: USER_START pid=5681 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.586985 kernel: audit: type=1105 audit(1768892395.580:889): pid=5681 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.586000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.590947 kernel: audit: type=1103 audit(1768892395.586:890): pid=5685 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.918891 sshd[5685]: Connection closed by 20.161.92.111 port 35068 Jan 20 06:59:55.919393 sshd-session[5681]: pam_unix(sshd:session): session closed for user core Jan 20 06:59:55.927304 kernel: audit: type=1106 audit(1768892395.920:891): pid=5681 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.927409 kernel: audit: 
type=1104 audit(1768892395.920:892): pid=5681 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.920000 audit[5681]: USER_END pid=5681 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.920000 audit[5681]: CRED_DISP pid=5681 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 06:59:55.930757 systemd[1]: sshd@24-10.0.0.92:22-20.161.92.111:35068.service: Deactivated successfully. Jan 20 06:59:55.932469 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 06:59:55.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.92:22-20.161.92.111:35068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:59:55.935679 systemd-logind[1660]: Session 26 logged out. Waiting for processes to exit. Jan 20 06:59:55.937032 systemd-logind[1660]: Removed session 26. 
Jan 20 06:59:58.210399 kubelet[2906]: E0120 06:59:58.210355 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 07:00:00.654052 systemd[1786]: Created slice background.slice - User Background Tasks Slice. Jan 20 07:00:00.656123 systemd[1786]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 20 07:00:00.675140 systemd[1786]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 20 07:00:01.211488 kubelet[2906]: E0120 07:00:01.211151 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 07:00:02.211245 kubelet[2906]: E0120 07:00:02.211172 2906 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 07:00:05.214176 kubelet[2906]: E0120 07:00:05.213972 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 07:00:06.210430 kubelet[2906]: E0120 07:00:06.210381 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" 
Jan 20 07:00:07.211062 kubelet[2906]: E0120 07:00:07.210710 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 07:00:11.211941 kubelet[2906]: E0120 07:00:11.211747 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 07:00:14.702788 containerd[1680]: time="2026-01-20T07:00:14.702657348Z" level=info msg="container event discarded" container=86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4 type=CONTAINER_CREATED_EVENT Jan 20 07:00:14.702788 containerd[1680]: time="2026-01-20T07:00:14.702754825Z" level=info msg="container event discarded" container=86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4 type=CONTAINER_STARTED_EVENT Jan 20 07:00:14.725037 containerd[1680]: time="2026-01-20T07:00:14.724976839Z" level=info msg="container event discarded" container=26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f type=CONTAINER_CREATED_EVENT Jan 20 07:00:14.725037 containerd[1680]: time="2026-01-20T07:00:14.725030762Z" level=info msg="container event 
discarded" container=26a31c3b2d749f94baee5653edaf86c0efccb33137a9879864c9ea26dd07923f type=CONTAINER_STARTED_EVENT Jan 20 07:00:14.743316 containerd[1680]: time="2026-01-20T07:00:14.743211111Z" level=info msg="container event discarded" container=e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79 type=CONTAINER_CREATED_EVENT Jan 20 07:00:14.743316 containerd[1680]: time="2026-01-20T07:00:14.743255297Z" level=info msg="container event discarded" container=08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364 type=CONTAINER_CREATED_EVENT Jan 20 07:00:14.743316 containerd[1680]: time="2026-01-20T07:00:14.743264272Z" level=info msg="container event discarded" container=1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e type=CONTAINER_CREATED_EVENT Jan 20 07:00:14.743316 containerd[1680]: time="2026-01-20T07:00:14.743270576Z" level=info msg="container event discarded" container=1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e type=CONTAINER_STARTED_EVENT Jan 20 07:00:14.783425 containerd[1680]: time="2026-01-20T07:00:14.783375832Z" level=info msg="container event discarded" container=b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75 type=CONTAINER_CREATED_EVENT Jan 20 07:00:14.853702 containerd[1680]: time="2026-01-20T07:00:14.853625612Z" level=info msg="container event discarded" container=e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79 type=CONTAINER_STARTED_EVENT Jan 20 07:00:14.889980 containerd[1680]: time="2026-01-20T07:00:14.889877344Z" level=info msg="container event discarded" container=08f8aabea7f5a8ca1c5f510d7a3e97b98d1996dd071d44debc2e0b5f63485364 type=CONTAINER_STARTED_EVENT Jan 20 07:00:14.902377 containerd[1680]: time="2026-01-20T07:00:14.902298900Z" level=info msg="container event discarded" container=b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75 type=CONTAINER_STARTED_EVENT Jan 20 07:00:15.212505 kubelet[2906]: E0120 07:00:15.212473 2906 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 07:00:16.210468 kubelet[2906]: E0120 07:00:16.210429 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 07:00:17.210881 kubelet[2906]: E0120 07:00:17.210495 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 07:00:19.211193 kubelet[2906]: E0120 07:00:19.210996 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 07:00:19.211193 kubelet[2906]: E0120 07:00:19.211093 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 07:00:20.080621 systemd[1]: cri-containerd-e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79.scope: Deactivated successfully. Jan 20 07:00:20.081064 systemd[1]: cri-containerd-e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79.scope: Consumed 4.298s CPU time, 60.7M memory peak, 256K read from disk. 
Jan 20 07:00:20.083404 containerd[1680]: time="2026-01-20T07:00:20.083365855Z" level=info msg="received container exit event container_id:\"e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79\" id:\"e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79\" pid:2720 exit_status:1 exited_at:{seconds:1768892420 nanos:82994880}" Jan 20 07:00:20.080000 audit: BPF prog-id=256 op=LOAD Jan 20 07:00:20.084414 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 07:00:20.084551 kernel: audit: type=1334 audit(1768892420.080:894): prog-id=256 op=LOAD Jan 20 07:00:20.080000 audit: BPF prog-id=83 op=UNLOAD Jan 20 07:00:20.090851 kernel: audit: type=1334 audit(1768892420.080:895): prog-id=83 op=UNLOAD Jan 20 07:00:20.086000 audit: BPF prog-id=98 op=UNLOAD Jan 20 07:00:20.086000 audit: BPF prog-id=102 op=UNLOAD Jan 20 07:00:20.093448 kernel: audit: type=1334 audit(1768892420.086:896): prog-id=98 op=UNLOAD Jan 20 07:00:20.093500 kernel: audit: type=1334 audit(1768892420.086:897): prog-id=102 op=UNLOAD Jan 20 07:00:20.111412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79-rootfs.mount: Deactivated successfully. Jan 20 07:00:20.496142 systemd[1]: cri-containerd-d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b.scope: Deactivated successfully. Jan 20 07:00:20.496417 systemd[1]: cri-containerd-d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b.scope: Consumed 31.197s CPU time, 101.6M memory peak. 
Jan 20 07:00:20.499834 containerd[1680]: time="2026-01-20T07:00:20.499601650Z" level=info msg="received container exit event container_id:\"d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b\" id:\"d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b\" pid:3225 exit_status:1 exited_at:{seconds:1768892420 nanos:498482700}" Jan 20 07:00:20.499000 audit: BPF prog-id=146 op=UNLOAD Jan 20 07:00:20.503377 kubelet[2906]: E0120 07:00:20.503346 2906 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.92:51354->10.0.0.94:2379: read: connection timed out" Jan 20 07:00:20.504620 kernel: audit: type=1334 audit(1768892420.499:898): prog-id=146 op=UNLOAD Jan 20 07:00:20.504659 kernel: audit: type=1334 audit(1768892420.499:899): prog-id=150 op=UNLOAD Jan 20 07:00:20.499000 audit: BPF prog-id=150 op=UNLOAD Jan 20 07:00:20.507146 systemd[1]: cri-containerd-b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75.scope: Deactivated successfully. Jan 20 07:00:20.507474 systemd[1]: cri-containerd-b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75.scope: Consumed 3.039s CPU time, 23M memory peak, 256K read from disk. 
Jan 20 07:00:20.506000 audit: BPF prog-id=257 op=LOAD Jan 20 07:00:20.506000 audit: BPF prog-id=93 op=UNLOAD Jan 20 07:00:20.510375 kernel: audit: type=1334 audit(1768892420.506:900): prog-id=257 op=LOAD Jan 20 07:00:20.510434 kernel: audit: type=1334 audit(1768892420.506:901): prog-id=93 op=UNLOAD Jan 20 07:00:20.511372 containerd[1680]: time="2026-01-20T07:00:20.511280559Z" level=info msg="received container exit event container_id:\"b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75\" id:\"b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75\" pid:2749 exit_status:1 exited_at:{seconds:1768892420 nanos:510979720}" Jan 20 07:00:20.509000 audit: BPF prog-id=108 op=UNLOAD Jan 20 07:00:20.512157 kernel: audit: type=1334 audit(1768892420.509:902): prog-id=108 op=UNLOAD Jan 20 07:00:20.509000 audit: BPF prog-id=112 op=UNLOAD Jan 20 07:00:20.514844 kernel: audit: type=1334 audit(1768892420.509:903): prog-id=112 op=UNLOAD Jan 20 07:00:20.537254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b-rootfs.mount: Deactivated successfully. Jan 20 07:00:20.548995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75-rootfs.mount: Deactivated successfully. 
Jan 20 07:00:20.942252 kubelet[2906]: I0120 07:00:20.942136 2906 scope.go:117] "RemoveContainer" containerID="b7e90adad330a93bd08104a54420df01e09bb1dd8423c1c3d359c8b2acbfcc75"
Jan 20 07:00:20.945708 kubelet[2906]: I0120 07:00:20.945608 2906 scope.go:117] "RemoveContainer" containerID="e620c2138f6d536409d5f4659e87e8a8e696137ff166ab20fed93518fc719e79"
Jan 20 07:00:20.948349 kubelet[2906]: I0120 07:00:20.948267 2906 scope.go:117] "RemoveContainer" containerID="d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b"
Jan 20 07:00:20.948890 containerd[1680]: time="2026-01-20T07:00:20.948862263Z" level=info msg="CreateContainer within sandbox \"1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 20 07:00:20.965237 containerd[1680]: time="2026-01-20T07:00:20.965193266Z" level=info msg="CreateContainer within sandbox \"86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 20 07:00:20.966975 containerd[1680]: time="2026-01-20T07:00:20.966936069Z" level=info msg="CreateContainer within sandbox \"27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 20 07:00:20.985839 containerd[1680]: time="2026-01-20T07:00:20.982616452Z" level=info msg="Container 81d85633563275c552f9ad0b450be70da35916f36f2ac4cbf1d363fda8c44e57: CDI devices from CRI Config.CDIDevices: []"
Jan 20 07:00:20.992617 containerd[1680]: time="2026-01-20T07:00:20.992375292Z" level=info msg="Container 72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7: CDI devices from CRI Config.CDIDevices: []"
Jan 20 07:00:21.003479 containerd[1680]: time="2026-01-20T07:00:21.003420573Z" level=info msg="CreateContainer within sandbox \"1f63fb49eabf11cd16383ba7c03d73b6af7b0ece4bbfceed58825783efa9a29e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"81d85633563275c552f9ad0b450be70da35916f36f2ac4cbf1d363fda8c44e57\""
Jan 20 07:00:21.005399 containerd[1680]: time="2026-01-20T07:00:21.004363387Z" level=info msg="Container 269e4534db5803ff94770f557e75fbf799e913ffdd2b6cb31dd5b0959ed01aee: CDI devices from CRI Config.CDIDevices: []"
Jan 20 07:00:21.007565 containerd[1680]: time="2026-01-20T07:00:21.007541258Z" level=info msg="StartContainer for \"81d85633563275c552f9ad0b450be70da35916f36f2ac4cbf1d363fda8c44e57\""
Jan 20 07:00:21.008577 containerd[1680]: time="2026-01-20T07:00:21.008410946Z" level=info msg="connecting to shim 81d85633563275c552f9ad0b450be70da35916f36f2ac4cbf1d363fda8c44e57" address="unix:///run/containerd/s/9386420be80b30d3545efc1dfd467779ec41836332e34b253974c021ed2f13c1" protocol=ttrpc version=3
Jan 20 07:00:21.014868 containerd[1680]: time="2026-01-20T07:00:21.014796652Z" level=info msg="CreateContainer within sandbox \"27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7\""
Jan 20 07:00:21.015498 containerd[1680]: time="2026-01-20T07:00:21.015465885Z" level=info msg="StartContainer for \"72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7\""
Jan 20 07:00:21.017056 containerd[1680]: time="2026-01-20T07:00:21.016471552Z" level=info msg="connecting to shim 72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7" address="unix:///run/containerd/s/edf7739ddb6af22edc05d8812341e154e0c0be971e12c54bf4dcb548a0559dae" protocol=ttrpc version=3
Jan 20 07:00:21.023615 containerd[1680]: time="2026-01-20T07:00:21.023561496Z" level=info msg="CreateContainer within sandbox \"86d8903cf485c6beaa6c4d6652a5fcf78c6abcd5bbc4f65f919a45d67157b0e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"269e4534db5803ff94770f557e75fbf799e913ffdd2b6cb31dd5b0959ed01aee\""
Jan 20 07:00:21.025520 containerd[1680]: time="2026-01-20T07:00:21.024919165Z" level=info msg="StartContainer for \"269e4534db5803ff94770f557e75fbf799e913ffdd2b6cb31dd5b0959ed01aee\""
Jan 20 07:00:21.026521 containerd[1680]: time="2026-01-20T07:00:21.026454068Z" level=info msg="connecting to shim 269e4534db5803ff94770f557e75fbf799e913ffdd2b6cb31dd5b0959ed01aee" address="unix:///run/containerd/s/2b77d8868af5f020fe4ef9eec8c7e3cc344c0deff8bbb4d51f8fcbb534e0f176" protocol=ttrpc version=3
Jan 20 07:00:21.037030 systemd[1]: Started cri-containerd-81d85633563275c552f9ad0b450be70da35916f36f2ac4cbf1d363fda8c44e57.scope - libcontainer container 81d85633563275c552f9ad0b450be70da35916f36f2ac4cbf1d363fda8c44e57.
Jan 20 07:00:21.048172 systemd[1]: Started cri-containerd-72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7.scope - libcontainer container 72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7.
Jan 20 07:00:21.059000 audit: BPF prog-id=258 op=LOAD
Jan 20 07:00:21.066004 systemd[1]: Started cri-containerd-269e4534db5803ff94770f557e75fbf799e913ffdd2b6cb31dd5b0959ed01aee.scope - libcontainer container 269e4534db5803ff94770f557e75fbf799e913ffdd2b6cb31dd5b0959ed01aee.
Jan 20 07:00:21.064000 audit: BPF prog-id=259 op=LOAD Jan 20 07:00:21.064000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2611 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643835363333353633323735633535326639616430623435306265 Jan 20 07:00:21.065000 audit: BPF prog-id=259 op=UNLOAD Jan 20 07:00:21.065000 audit[5764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643835363333353633323735633535326639616430623435306265 Jan 20 07:00:21.067000 audit: BPF prog-id=260 op=LOAD Jan 20 07:00:21.067000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2611 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.067000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643835363333353633323735633535326639616430623435306265 Jan 20 07:00:21.068000 audit: BPF prog-id=261 op=LOAD Jan 20 07:00:21.068000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2611 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643835363333353633323735633535326639616430623435306265 Jan 20 07:00:21.068000 audit: BPF prog-id=261 op=UNLOAD Jan 20 07:00:21.068000 audit[5764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643835363333353633323735633535326639616430623435306265 Jan 20 07:00:21.068000 audit: BPF prog-id=260 op=UNLOAD Jan 20 07:00:21.068000 audit[5764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
07:00:21.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643835363333353633323735633535326639616430623435306265 Jan 20 07:00:21.068000 audit: BPF prog-id=262 op=LOAD Jan 20 07:00:21.068000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2611 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643835363333353633323735633535326639616430623435306265 Jan 20 07:00:21.078000 audit: BPF prog-id=263 op=LOAD Jan 20 07:00:21.078000 audit: BPF prog-id=264 op=LOAD Jan 20 07:00:21.078000 audit[5782]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2585 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236396534353334646235383033666639343737306635353765373566 Jan 20 07:00:21.078000 audit: BPF prog-id=264 op=UNLOAD Jan 20 07:00:21.078000 audit[5782]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236396534353334646235383033666639343737306635353765373566 Jan 20 07:00:21.079000 audit: BPF prog-id=265 op=LOAD Jan 20 07:00:21.079000 audit: BPF prog-id=266 op=LOAD Jan 20 07:00:21.079000 audit[5782]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2585 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236396534353334646235383033666639343737306635353765373566 Jan 20 07:00:21.079000 audit: BPF prog-id=267 op=LOAD Jan 20 07:00:21.079000 audit[5782]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2585 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236396534353334646235383033666639343737306635353765373566 Jan 20 07:00:21.079000 audit: BPF prog-id=267 op=UNLOAD Jan 20 07:00:21.079000 audit[5782]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=2585 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236396534353334646235383033666639343737306635353765373566 Jan 20 07:00:21.079000 audit: BPF prog-id=266 op=UNLOAD Jan 20 07:00:21.079000 audit[5782]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2585 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236396534353334646235383033666639343737306635353765373566 Jan 20 07:00:21.079000 audit: BPF prog-id=268 op=LOAD Jan 20 07:00:21.079000 audit[5770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3007 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623139396365363763313936313137326432383364343262323863 Jan 20 07:00:21.079000 audit: BPF prog-id=268 op=UNLOAD Jan 20 07:00:21.079000 audit[5770]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623139396365363763313936313137326432383364343262323863 Jan 20 07:00:21.079000 audit: BPF prog-id=269 op=LOAD Jan 20 07:00:21.079000 audit[5782]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2585 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236396534353334646235383033666639343737306635353765373566 Jan 20 07:00:21.079000 audit: BPF prog-id=270 op=LOAD Jan 20 07:00:21.079000 audit[5770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3007 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623139396365363763313936313137326432383364343262323863 Jan 20 07:00:21.081000 audit: BPF prog-id=271 op=LOAD Jan 20 07:00:21.081000 audit[5770]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3007 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623139396365363763313936313137326432383364343262323863 Jan 20 07:00:21.081000 audit: BPF prog-id=271 op=UNLOAD Jan 20 07:00:21.081000 audit[5770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623139396365363763313936313137326432383364343262323863 Jan 20 07:00:21.081000 audit: BPF prog-id=270 op=UNLOAD Jan 20 07:00:21.081000 audit[5770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623139396365363763313936313137326432383364343262323863 Jan 20 07:00:21.081000 audit: BPF prog-id=272 op=LOAD Jan 
20 07:00:21.081000 audit[5770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3007 pid=5770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:00:21.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623139396365363763313936313137326432383364343262323863 Jan 20 07:00:21.132262 containerd[1680]: time="2026-01-20T07:00:21.132223563Z" level=info msg="StartContainer for \"81d85633563275c552f9ad0b450be70da35916f36f2ac4cbf1d363fda8c44e57\" returns successfully" Jan 20 07:00:21.135937 containerd[1680]: time="2026-01-20T07:00:21.135876214Z" level=info msg="StartContainer for \"72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7\" returns successfully" Jan 20 07:00:21.152193 containerd[1680]: time="2026-01-20T07:00:21.152162071Z" level=info msg="StartContainer for \"269e4534db5803ff94770f557e75fbf799e913ffdd2b6cb31dd5b0959ed01aee\" returns successfully" Jan 20 07:00:21.650530 kubelet[2906]: E0120 07:00:21.650151 2906 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.92:54500->10.0.0.94:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-597cf6c9f4-c4gmt.188c5e11db3014ef calico-apiserver 1933 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-597cf6c9f4-c4gmt,UID:52f9dd62-5a28-4d34-8a7e-35c040c0ecfe,APIVersion:v1,ResourceVersion:794,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4585-0-0-n-f719bce5cf,},FirstTimestamp:2026-01-20 06:56:24 +0000 UTC,LastTimestamp:2026-01-20 07:00:11.211712031 +0000 UTC m=+290.123349791,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4585-0-0-n-f719bce5cf,}" Jan 20 07:00:26.210652 kubelet[2906]: E0120 07:00:26.210597 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-c4gmt" podUID="52f9dd62-5a28-4d34-8a7e-35c040c0ecfe" Jan 20 07:00:27.348890 containerd[1680]: time="2026-01-20T07:00:27.348808306Z" level=info msg="container event discarded" container=bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea type=CONTAINER_CREATED_EVENT Jan 20 07:00:27.348890 containerd[1680]: time="2026-01-20T07:00:27.348871838Z" level=info msg="container event discarded" container=bd9a0a03fa97018adbacfd0b0a17e5dcd3e503ecaccef3364e999f8ea6f65cea type=CONTAINER_STARTED_EVENT Jan 20 07:00:27.374119 containerd[1680]: time="2026-01-20T07:00:27.374045960Z" level=info msg="container event discarded" container=e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7 type=CONTAINER_CREATED_EVENT Jan 20 07:00:27.435249 containerd[1680]: time="2026-01-20T07:00:27.435165985Z" level=info msg="container event discarded" container=27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6 type=CONTAINER_CREATED_EVENT Jan 20 07:00:27.435249 containerd[1680]: time="2026-01-20T07:00:27.435220236Z" level=info msg="container event 
discarded" container=27339f3b8e78a1e8399a16e975022504f4d4ac36510d4e4116b0d2a4899911f6 type=CONTAINER_STARTED_EVENT Jan 20 07:00:27.460478 containerd[1680]: time="2026-01-20T07:00:27.460404329Z" level=info msg="container event discarded" container=e9fa7ce628608af68d12726f29666a2172a9dfaf35a58545db29340ed02ffdd7 type=CONTAINER_STARTED_EVENT Jan 20 07:00:27.666113 kubelet[2906]: I0120 07:00:27.666075 2906 status_manager.go:890] "Failed to get status for pod" podUID="6e4b701fb4a4e79f892a204f5b24f14a" pod="kube-system/kube-apiserver-ci-4585-0-0-n-f719bce5cf" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.92:54608->10.0.0.94:2379: read: connection timed out" Jan 20 07:00:29.839813 containerd[1680]: time="2026-01-20T07:00:29.839748056Z" level=info msg="container event discarded" container=d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b type=CONTAINER_CREATED_EVENT Jan 20 07:00:29.893032 containerd[1680]: time="2026-01-20T07:00:29.892961109Z" level=info msg="container event discarded" container=d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b type=CONTAINER_STARTED_EVENT Jan 20 07:00:30.210477 kubelet[2906]: E0120 07:00:30.210417 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-597cf6c9f4-jx5cs" podUID="de3eadd9-d35e-43b1-acf5-88fe04381bf9" Jan 20 07:00:30.211277 kubelet[2906]: E0120 07:00:30.211239 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79cc8b6859-6sdc2" podUID="eb945f54-1cfd-4248-85ea-34b880e5b4b5" Jan 20 07:00:30.211519 kubelet[2906]: E0120 07:00:30.211496 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8w7k" podUID="506bd27e-3197-4d34-a858-e04017d318df" Jan 20 07:00:30.504813 kubelet[2906]: E0120 07:00:30.504700 2906 request.go:1332] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Jan 20 07:00:30.505402 kubelet[2906]: E0120 07:00:30.505077 2906 controller.go:195] "Failed to update lease" err="unexpected error when reading response 
body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Jan 20 07:00:31.210383 kubelet[2906]: E0120 07:00:31.210180 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-676fb446dd-r6rmf" podUID="e2f4d2c7-d335-42bb-9262-2a522436304e" Jan 20 07:00:32.210552 kubelet[2906]: E0120 07:00:32.210495 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8j6t9" podUID="3f79bfe2-fd9a-4aff-ac23-745eeb4426b7" Jan 20 07:00:32.581251 containerd[1680]: time="2026-01-20T07:00:32.581154677Z" level=info msg="received container exit event container_id:\"72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7\" id:\"72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7\" pid:5810 exit_status:1 exited_at:{seconds:1768892432 nanos:580734224}" Jan 20 07:00:32.582481 systemd[1]: cri-containerd-72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7.scope: Deactivated successfully. 
Jan 20 07:00:32.584000 audit: BPF prog-id=265 op=UNLOAD
Jan 20 07:00:32.587790 kernel: kauditd_printk_skb: 66 callbacks suppressed
Jan 20 07:00:32.587909 kernel: audit: type=1334 audit(1768892432.584:928): prog-id=265 op=UNLOAD
Jan 20 07:00:32.587932 kernel: audit: type=1334 audit(1768892432.584:929): prog-id=272 op=UNLOAD
Jan 20 07:00:32.584000 audit: BPF prog-id=272 op=UNLOAD
Jan 20 07:00:32.604192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7-rootfs.mount: Deactivated successfully.
Jan 20 07:00:32.982231 kubelet[2906]: I0120 07:00:32.980491 2906 scope.go:117] "RemoveContainer" containerID="d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b"
Jan 20 07:00:32.982231 kubelet[2906]: I0120 07:00:32.980717 2906 scope.go:117] "RemoveContainer" containerID="72b199ce67c1961172d283d42b28c1756b2f74211232d2b56b8db06e744510f7"
Jan 20 07:00:32.982231 kubelet[2906]: E0120 07:00:32.980857 2906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-xdm6q_tigera-operator(c6825465-f405-4175-9378-73f98628e5ce)\"" pod="tigera-operator/tigera-operator-7dcd859c48-xdm6q" podUID="c6825465-f405-4175-9378-73f98628e5ce"
Jan 20 07:00:32.982480 containerd[1680]: time="2026-01-20T07:00:32.982002402Z" level=info msg="RemoveContainer for \"d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b\""
Jan 20 07:00:32.990464 containerd[1680]: time="2026-01-20T07:00:32.990408700Z" level=info msg="RemoveContainer for \"d6e229bf6ed34f361760eb5e751fa66eac7279ded8a93c34903000f81fe7de1b\" returns successfully"